Tuesday, August 31, 2021

AUSTRALIAN SCIENTISTS HELP CATCH THE FIRST MOMENTS OF A SUPERNOVA

Astronomers from the Australian National University have led an international team of researchers to observe, for the first time, the early light curve from a supernova event and to model the type of progenitor star that caused it.

The initial shock breakout of light just as the supernova occurs on a star. Credit: NASA.

Astronomers from the Australian National University (ANU) have led an international team of researchers to make the first observations of the light emitted just as a supernova explosion detonates in space. The data have also provided an opportunity to test different models by which the progenitor star can be inferred from the supernova’s light curve.

Until now, most of the light received from supernova events was detected well after the initial blast, produced by the decay of radioactive elements in the expanding debris shell – emission that peaks some time after the original explosion.

But this new research, published in the journal Monthly Notices of the Royal Astronomical Society, outlines the detection of a light curve peak (known as a ‘shock cooling light-curve’) which captures the burst of emission immediately after the explosion occurs – a rare observation, as these events fade quickly. The supernova has been designated SN2017jgh.

"This is the first time anyone has had such a detailed look at a complete shock cooling curve in any supernova," said PhD scholar and lead author, Mr Patrick Armstrong.

"Because the initial stage of a supernova happens so quickly, it is very hard for most telescopes to record this phenomenon.”

"Until now, the data we had was incomplete and only included the dimming of the shock cooling curve and the subsequent explosion, but never the bright burst of light at the very start of the supernova.

"This major discovery will give us the data we need to identify other stars that became supernovae, even after they have exploded," he said.

Supernova events can be triggered by a number of different factors, including the merging of compact stars like white dwarfs or the collapse of more massive stars. So violent and powerful are these events that they forge a large number of the elements we see around us, including many of the heavier elements in the periodic table.

By studying the light from these events, astronomers are effectively given a tool to delve into the formation of elements across the Universe. Additionally, by studying the shock cooling light-curves produced during supernovae, astronomers can now also start to answer some of the ongoing questions about the dynamics of collapsing stellar objects once they reach the ends of their lives.

THE STAR THAT WAS…

Artist illustration of a yellow supergiant star. Credit: M. Jadraef.

Based on the observations of the shock cooling light-curve, and the modelling completed, the research team were able to determine that the progenitor star of SN2017jgh was a yellow supergiant. These stars are usually evolved F or G class stars that are no longer burning hydrogen in their cores and so have expanded to enormous sizes – which in turn increases their luminosity.

Yellow supergiants usually have temperatures of around 4,000 – 7,000 Kelvin, with luminosities starting at about 1,000 times that of our Sun and, in the most extreme cases, reaching up to 100,000 times the solar luminosity.

SN2017jgh’s progenitor star was also identified in imagery taken prior to the explosion, and was determined to have had an effective temperature somewhere between 4,000 and 5,000 Kelvin and an original mass of about 17 times that of our Sun.

However, yellow supergiants are less common than the red supergiants that dominate the night sky, like Antares and Betelgeuse, and are usually smaller in size. The northern pole star, Polaris, is catalogued as a yellow supergiant.

SN2017jgh also featured a surrounding envelope of hydrogen gas whose mass is estimated to be between 0.5 and 1.7 times that of the Sun. These envelopes form as the ageing star throws off enormous volumes of hydrogen into its local surroundings through mechanisms such as strong stellar winds, stellar rotation, binary interactions and nuclear instabilities.

The hydrogen envelope of SN2017jgh is reported in the paper to have a radius of approximately 130 solar radii – about 90 million kilometres, or roughly 180 million kilometres across – so if you placed it where our Sun is, its surface would reach most of the way out to the orbit of Venus.
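
That size comparison is simple arithmetic to check. Below is a minimal Python sketch using an approximate solar radius of 696,000 km and rounded mean orbital distances for the inner planets – standard reference values rather than figures taken from the paper itself:

```python
# Rough scale check on the hydrogen envelope of SN2017jgh's progenitor.
# Constants are standard approximate values, not figures from the paper.
R_SUN_KM = 6.96e5                      # solar radius in km
ORBITS_KM = {"Mercury": 5.79e7, "Venus": 1.08e8, "Earth": 1.50e8}

envelope_radius_km = 130 * R_SUN_KM    # ~9.0e7 km
envelope_diameter_km = 2 * envelope_radius_km

print(f"Envelope radius:   {envelope_radius_km / 1e6:.0f} million km")
print(f"Envelope diameter: {envelope_diameter_km / 1e6:.0f} million km")
for planet, orbit_km in ORBITS_KM.items():
    print(f"Reaches {envelope_radius_km / orbit_km:.0%} of the way to {planet}'s orbit")
```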

The supernova was located 0.157 arcseconds from the centre of its host galaxy, which resides at a distance of just over one billion light-years from Earth.

SUPERNOVAE IN A RANGE OF FLAVOURS

Classifying supernova events into categories, based on how they exhibit hydrogen, silicon and helium features. Credit: H. Stevance.

Supernovae can be triggered by a number of different progenitor objects and events. The taxonomy of these events is divided into two main classes – those whose spectra feature hydrogen, and those whose spectra don’t. From these two classes, further sub-classes are established.

The first class, Type I supernovae, are considered thermal runaway events and are usually associated with compact objects like white dwarfs. These supernovae are typically triggered when material accreted from a companion star builds up enough pressure to ignite the core, or when two compact objects merge (though in the case of neutron star mergers, the event is known as a kilonova).

The other category, Type II supernovae, covers much more powerful and destructive events. These are triggered when a massive star (usually 8 – 25 solar masses) can no longer produce enough energy in its core to sustain the outward radiation pressure and so succumbs to the inward pull of gravity.

This causes the star to collapse inwards, crushing the core before rebounding in the supernova explosion – this is also how exotic objects like neutron stars, pulsars and black holes are born.

Type II supernovae can be further sub-categorised depending on a number of different features in the observed spectrum and light curve – especially whether or not they show silicon, helium, narrow lines, or an evolving spectrum.
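
As an illustration of this traditional taxonomy (and only an illustration – this is the standard textbook decision scheme, not code from the study), the branching can be sketched in a few lines of Python:

```python
# Toy classifier for the traditional supernova taxonomy described above.
# It follows the standard hydrogen / silicon / helium decision scheme; real
# classification also weighs line widths and how the spectrum evolves.
def classify_supernova(has_hydrogen: bool, has_silicon: bool, has_helium: bool) -> str:
    """Return a coarse spectral class from the presence of key lines."""
    if has_hydrogen:
        return "Type II (core collapse; sub-types IIP, IIL, IIn, IIb follow from the light curve and line evolution)"
    if has_silicon:
        return "Type Ia (thermal runaway, typically involving a white dwarf)"
    if has_helium:
        return "Type Ib (core collapse of a star stripped of hydrogen)"
    return "Type Ic (core collapse of a star stripped of hydrogen and helium)"

# A spectrum showing hydrogen lines, as SN2017jgh initially did:
print(classify_supernova(has_hydrogen=True, has_silicon=False, has_helium=True))
```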

One indicator that the progenitor star had shed much of its hydrogen envelope is the fading of hydrogen lines in the weeks after the initial explosion, giving way to dominant helium lines – a sign that a lot of the star’s hydrogen layer had been stripped away before it exploded.

READING THE LIGHT CURVE OF AN EXPLODING STAR

When it comes to Type II (core-collapse) supernovae, there are generally two prominent peaks in the light curve. The first is created when photons that had been trapped inside the star rush outwards in the early onset of the violent explosion; it lasts only a few days. These emissions can provide astronomers with a great deal of information about the progenitor star and the shockwave generated by the explosion.

The second peak is powered by nuclear emissions from the radioactive decay of nickel-56 into cobalt and then iron, building over the days and weeks after the event before gradually fading in luminosity. It is during this period that new elements forged through the nucleosynthesis associated with the stellar explosion are revealed.
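
The heating behind that second peak can be sketched with standard textbook decay times and energy-release rates for the nickel-56 chain. The nickel mass below is an arbitrary illustrative value, and this simple heating function is not the shock-cooling modelling actually used in the paper:

```python
import math

# Sketch of the radioactive heating that powers the second light-curve peak:
# the decay of nickel-56 to cobalt-56 and then to stable iron-56. The decay
# times and energy-release rates are standard textbook values; the nickel
# mass is an arbitrary illustrative choice, not a result from the paper.
EPS_NI = 3.9e10     # erg per second per gram, heating from Ni-56 decay
EPS_CO = 6.78e9     # erg per second per gram, heating from Co-56 decay
TAU_NI = 8.8        # days, Ni-56 mean lifetime
TAU_CO = 111.3      # days, Co-56 mean lifetime
M_SUN_G = 1.989e33  # grams in one solar mass

def radioactive_heating(t_days, nickel_mass_solar=0.1):
    """Instantaneous heating rate (erg/s) from an initial mass of Ni-56."""
    m_ni = nickel_mass_solar * M_SUN_G
    return m_ni * ((EPS_NI - EPS_CO) * math.exp(-t_days / TAU_NI)
                   + EPS_CO * math.exp(-t_days / TAU_CO))

for t in (1, 10, 30, 100):
    print(f"t = {t:3d} days : heating ~ {radioactive_heating(t):.2e} erg/s")
```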

Historically, partial shock cooling light-curves have been observed and reported for other supernova events, but these new findings have allowed astronomers to capture the complete evolution of the initial peak of a supernova for the first time.

Whilst many supernovae occur and can be studied, their early onset is the only time these shock cooling light-curves can be observed, so having a telescope pointed at the right place at the right time is a rare chance.

Infographic on what the different light curves of supernovae look like – note the difference in the shape of the curves, as well as the elemental composition. The two main categories of supernova events can be classed as thermonuclear and core-collapse. Credit: H. Stevance.

ANALYSIS AND MODELLING OF THE LIGHT FROM SN2017JGH

The Kepler Space Telescope, which assisted in obtaining data for this discovery. Credit: NASA.

A number of different observations were combined to establish the results outlined in this latest paper on SN2017jgh. The supernova was originally discovered by Pan-STARRS1 – a 1.8-metre telescope located on Maui, Hawaii. Using its 1.4-gigapixel camera, it identified the supernova at roughly magnitude 20.

Photometry (measuring light in different bands, at wavelengths similar to those the human eye observes) was produced using the Pan-STARRS1 filter system (grizy), while the Swope Supernova Survey (SSS) – which uses a 1-metre aperture telescope at Las Campanas in Chile – complemented this with its own observations. The SSS telescope observed the supernova through its gri filters between December 2017 and February 2018.

As well as the ground-based observations, the Kepler/K2 spacecraft observed the event from orbit, avoiding any disturbances produced by our atmosphere. It monitored the event at a 30-minute cadence over an 80-day campaign, capturing the rise of the light curve’s first peak in great detail.

For the optical spectroscopy component, the Gemini Multi-Object Spectrograph on the Gemini South telescope (also located in Chile) took observations of the spectrum in early January 2018, two days prior to the radioactive maximum peak, which occurs roughly 14 days after discovery.

Overall, the light curves analysed across all observations point to a supernova similar to one observed in the early 1990s, known as SN1993J, which also had a yellow supergiant progenitor.

A number of shock cooling light-curve models were then tested against the results, with the SW 17 model providing the most accurate fit to the observed data.

"We've proven one model works better than the rest at identifying different supernovae stars and there is no longer a need to test multiple other models, which has traditionally been the case," said Astrophysicist and ANU researcher Dr Brad Tucker, also a co-author of the paper.

"Astronomers across the world will be able to use SW 17 and be confident it is the best model to identify stars that turn into supernovas."

As well as providing researchers around the world with a well-fitted model for these early peaks in supernova events, the new findings reveal a little more of the detail around those first moments of one of the most violent and destructive events in our Universe – events that, in turn, give birth to new materials.

"This will provide us with further opportunities to improve our models and build our understanding of supernovae and where the elements that make up the world around us come from," said Mr Armstrong.

New mathematical solutions to an old problem in astronomy

For millennia, humanity has observed the changing phases of the Moon. The rise and fall of sunlight reflected off the Moon, as it presents its different faces to us, is known as a "phase curve". Measuring phase curves of the Moon and Solar System planets is an ancient branch of astronomy that goes back at least a century. The shapes of these phase curves encode information on the surfaces and atmospheres of these celestial bodies. In modern times, astronomers have measured the phase curves of exoplanets using space telescopes such as Hubble, Spitzer, TESS and CHEOPS. These observations are compared with theoretical predictions. In order to do so, one needs a way of calculating these phase curves. It involves seeking a solution to a difficult mathematical problem concerning the physics of radiation.

Approaches for the calculation of phase curves have existed since the 18th century. The oldest of these solutions goes back to the Swiss mathematician, physicist and astronomer Johann Heinrich Lambert, who lived in the 18th century. "Lambert's law of reflection" is attributed to him. The problem of calculating reflected light from Solar System planets was posed by the American astronomer Henry Norris Russell in an influential 1916 paper. Another well-known 1981 solution is attributed to the American lunar scientist Bruce Hapke, who built on the classic work of the Indian-American Nobel laureate Subrahmanyan Chandrasekhar in 1960. Hapke pioneered the study of the Moon using mathematical solutions of phase curves. The Soviet physicist Viktor Sobolev also made important contributions to the study of reflected light from celestial bodies in his influential 1975 textbook. Inspired by the work of these scientists, theoretical astrophysicist Kevin Heng of the Center for Space and Habitability (CSH) at the University of Bern has discovered an entire family of new mathematical solutions for calculating phase curves. The paper, authored by Kevin Heng in collaboration with Brett Morris from the National Center of Competence in Research (NCCR) PlanetS, which the University of Bern manages together with the University of Geneva, and Daniel Kitzmann from the CSH, has just been published in Nature Astronomy.

Generally applicable solutions

"I was fortunate that this rich body of work had already been done by these great scientists. Hapke had discovered a simpler way to write down the classic solution of Chandrasekhar, who famously solved the radiative transfer equation for isotropic scattering. Sobolev had realised that one can study the problem in at least two mathematical coordinate systems." Sara Seager brought the problem to Heng's attention by her summary of it in her 2010 textbook.

By combining these insights, Heng was able to write down mathematical solutions for the strength of reflection (the albedo) and the shape of the phase curve, both completely on paper and without resorting to a computer. "The ground-breaking aspect of these solutions is that they are valid for any law of reflection, which means they can be used in very general ways. The defining moment came for me when I compared these pen-and-paper calculations to what other researchers had done using computer calculations. I was blown away by how well they matched," said Heng.
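
To give a concrete sense of what such a phase curve looks like in practice, the simplest classical special case – a Lambertian (perfectly diffuse) sphere, known long before this work – can be evaluated in a few lines. The planet parameters below are made-up, Jupiter-like illustrative values, and this sketch does not reproduce the new general solutions in the paper:

```python
import math

# The classical Lambertian phase curve -- one simple special case of the
# reflected-light problem, known long before this work. Heng's new solutions
# cover arbitrary reflection laws; this sketch only illustrates what a phase
# curve is in code. The planet parameters are made-up, Jupiter-like values.
def lambert_phase_function(alpha_rad):
    """Normalised phase function of a Lambert sphere (equals 1 at full phase)."""
    return (math.sin(alpha_rad) + (math.pi - alpha_rad) * math.cos(alpha_rad)) / math.pi

def reflected_flux_ratio(alpha_rad, geometric_albedo=2/3,
                         planet_radius_km=7.0e4, orbital_distance_km=7.8e8):
    """Planet-to-star flux ratio at phase angle alpha (radians)."""
    return (geometric_albedo
            * (planet_radius_km / orbital_distance_km) ** 2
            * lambert_phase_function(alpha_rad))

for alpha_deg in (0, 45, 90, 135, 180):
    ratio = reflected_flux_ratio(math.radians(alpha_deg))
    print(f"phase angle {alpha_deg:3d} deg : planet/star flux ~ {ratio:.2e}")
```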

Successful analysis of the phase curve of Jupiter

"What excites me is not just the discovery of new theory, but also its major implications for interpreting data", says Heng. For example, the Cassini spacecraft measured phase curves of Jupiter in the early 2000s, but an in-depth analysis of the data had not previously been done, probably because the calculations were too computationally expensive. With this new family of solutions, Heng was able to analyze the Cassini phase curves and infer that the atmosphere of Jupiter is filled with clouds made up of large, irregular particles of different sizes. This parallel study has just been published by the Astrophysical Journal Letters, in collaboration with Cassini data expert and planetary scientist Liming Li of Houston University in Texas, U.S.A.

New possibilities for the analysis of data from space telescopes

"The ability to write down mathematical solutions for phase curves of reflected light on paper means that one can use them to analyze data in seconds," said Heng. It opens up new ways of interpreting data that were previously infeasible. Heng is collaborating with Pierre Auclair-Desrotour (formerly CSH, currently at Paris Observatory) to further generalize these mathematical solutions. "Pierre Auclair-Desrotour is a more talented applied mathematician than I am, and we promise exciting results in the near future," said Heng.

In the Nature Astronomy paper, Heng and his co-authors demonstrated a novel way of analyzing the phase curve of the exoplanet Kepler-7b from the Kepler space telescope. Brett Morris led the data analysis part of the paper. "Brett Morris leads the data analysis for the CHEOPS mission in my research group, and his modern data science approach was critical for successfully applying the mathematical solutions to real data," explained Heng. They are currently collaborating with scientists from the American-led TESS space telescope to analyze TESS phase curve data. Heng envisions that these new solutions will lead to novel ways of analyzing phase curve data from the upcoming, 10-billion-dollar James Webb Space Telescope, which is due to launch later in 2021. "What excites me most of all is that these mathematical solutions will remain valid long after I am gone, and will probably make their way into standard textbooks," said Heng.

More information: Kevin Heng et al, Closed-formed solutions of geometric albedos and phase curves of exoplanets for any reflection law, Nature Astronomy (2021). DOI: 10.1038/s41550-021-01444-7
Kevin Heng et al, Jupiter as an Exoplanet: Insights from Cassini Phase Curves, The Astrophysical Journal Letters (2021). DOI: 10.3847/2041-8213/abe872
Journal information: Nature Astronomy, Astrophysical Journal Letters
Provided by University of Bern 

 

Woodside: Gulf Of Mexico Has Solid Growth Potential

The Gulf of Mexico offers some good growth opportunities thanks to the quality assets BHP operates there, Australia’s Woodside Petroleum’s chief executive Meg O’Neill told MarketWatch.

"One of the things that I think is really exciting about the merger is it does give us a substantially increased growth optionality when we look at the quality assets in the Gulf of Mexico, the quality assets we have here in Australia and then other opportunities in places like Trinidad, Tobago, Mexico and Senegal," she said in an interview.

BHP sold its oil business to the Australian major earlier this month in an all-stock merger deal.

“Merging Woodside with BHP’s oil and gas business delivers a stronger balance sheet, increased cash flow and enduring financial strength to fund planned developments in the near term and new energy sources into the future,” Woodside said in its statement at the time.

BHP’s assets in the Gulf of Mexico, according to O’Neill, were particularly valuable. The assets include operating stakes in two fields—Shenzi and Neptune—and non-operating interests in two other fields, Atlantis and Mad Dog. BHP recently approved a $544-million cash injection for the development of Shenzi North.

"We believe there is significant running room in those assets," O’Neill told MarketWatch.

The Shenzi field holds estimated recoverable reserves of 350 to 400 million barrels of oil equivalent, with additional potential reserves also being targeted for development, according to Offshore Technology. It has a production capacity of 100,000 bpd but exceeded that in its first year of production.

The Neptune field has reserves estimated at between 100 and 150 million barrels and the capacity to produce 50,000 bpd of crude oil.

Atlantis, operated by BP, is the third-largest oil field in the Gulf of Mexico. It has a production capacity of 200,000 bpd of crude oil. Mad Dog, another field operated by BP, holds between 200 and 450 million barrels of oil equivalent and can produce some 80,000 bpd.

By Irina Slav for Oilprice.com

 

Canadian Liberals Vow Stricter Emission Regulation On Oil Industry

The oil industry of Canada is facing more stringent regulation if the Liberals win the next election, according to a new platform published this weekend.

In it, the ruling party of PM Justin Trudeau wrote that it will work towards making the country’s oil and gas sector net-zero by 2050 by “Making sure the oil and gas sector reduces emissions from current levels at a pace and scale needed to achieve net-zero by 2050, with 5-year targets starting in 2025,” and by “Requiring oil and gas companies to reduce methane emissions by at least 75% below 2012 levels by 2030.”

The platform also states that the Liberals will work towards phasing out thermal coal exports and imports by 2030 as part of efforts to achieve a net-zero electricity grid by 2035. This goal will also involve setting a clean energy standard, adding more tax credits for low-carbon energy projects and setting up a Pan-Canadian Grid Council, whose purpose would be to “make Canada the most reliable, cost-effective and carbon-free electricity producer in the world.”

“A serious plan for the environment is a plan for the economy,” Justin Trudeau, leader of the Liberal Party, told Bloomberg. “We have done more to fight climate change and protect our environment than any other government in Canadian history.”

The plan laid out in the pre-election platform also includes more incentives for Canadians to buy electric vehicles, with a view to having half of new car sales be EVs by 2030 and all new cars sold in the country be net-zero by 2035. To this end, the next Liberal government promises to extend an EV consumer rebate of $3,960 (C$5,000) to half a million Canadians and to build 50,000 new charging stations across the country.

The platform also envisages supporting measures for the Canadian oil and gas industry, featuring a $1.58-billion (C$2-billion) Futures Fund for Alberta, Saskatchewan, and Newfoundland and Labrador, which the platform says will seek to ensure the energy transition is just and provide oil and gas workers with training “to succeed in the net-zero future.”

By Irina Slav for Oilprice.com

 

Can The U.S. Keep Its Wind Energy Boom Alive?

Wind energy holds enormous potential to generate carbon-free electricity around the world, and the energy industry finally seems to be catching on. Last year the United States broke records for wind energy installation, and it looks like the wind revolution is just getting started.

While current global wind power capacity is capable of generating just a fraction of the world's energy demand, wind power’s technical potential actually exceeds worldwide energy production. The technical potential of a renewable energy technology is the amount of energy generation that is theoretically achievable once system performance, topographic, environmental, and land-use constraints are accounted for. And even when taking all of these constraints into consideration, wind energy alone would be capable of filling the entire world’s energy needs. In order to actually make that happen, though, massive scaling of both on- and offshore wind farms would be necessary -- and that kind of scaling is not without its drawbacks.

Other than initial cost, which could be a barrier to entry but which is decreasing all the time thanks to technological improvements and economies of scale, large-scale wind projects pose potential negative environmental and social externalities. Wildlife, such as bird and bat collisions on-shore and marine life offshore, must be considered. In terms of social impact, wind farms alter landscapes, block views, and can cause potential radar interference. These negative impacts, however, pale in comparison to the benefits of wind power, not to mention the negative externalities of global warming.

According to the Intergovernmental Panel on Climate Change (IPCC), the energy used and greenhouse gases emitted over the life cycle of a wind turbine, from manufacturing to decommissioning, are puny in comparison to the energy generated and emissions mitigated over the apparatus’ lifetime. “The GHG emissions intensity of wind energy is estimated to range from 8 to 20 g CO2/kWh in most instances, whereas energy payback times are between 3.4 to 8.5 months,” a 2018 report stated.
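
To put those intensity figures in perspective, a back-of-the-envelope comparison with coal can be made directly from the quoted 8 to 20 g CO2/kWh range. The coal figure of roughly 820 g CO2-eq/kWh and the turbine's annual output below are assumed, commonly cited illustrative values, not numbers from this article:

```python
# Back-of-the-envelope comparison of lifecycle emissions per unit of
# electricity. The 8 and 20 g CO2/kWh bounds come from the report quoted
# above; the coal figure (~820 g CO2-eq/kWh, a commonly cited IPCC lifecycle
# median) and the turbine's annual output are assumptions for illustration.
WIND_G_PER_KWH = (8, 20)
COAL_G_PER_KWH = 820               # assumed comparison value, not from this article
ANNUAL_OUTPUT_KWH = 3_500_000      # illustrative yearly output of a single ~1.5 MW turbine

for wind in WIND_G_PER_KWH:
    avoided_tonnes = (COAL_G_PER_KWH - wind) * ANNUAL_OUTPUT_KWH / 1e6
    print(f"Wind at {wind:2d} g/kWh vs coal: roughly {avoided_tonnes:,.0f} t CO2-eq avoided per year")
```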

In this light, the wind power revolution can’t come fast enough. Just this month, the United Nations and the IPCC sounded a “code red for humanity” which stated in no uncertain terms that we have reached the point of no return for climate change, and the global clean energy transition must be swift and absolute in order to avoid the worst impacts of global warming. Wind energy will have to be a considerable part of that front.

The technology is already being scaled at unprecedented rates. More wind energy capacity was installed in the United States in 2020 than in any previous year, and in 2019 wind power surpassed hydropower as the country’s top source of renewable energy, the same year that renewable energy overtook coal in the U.S. energy mix. This success story owes a lot to wind-friendly policy in the United States, where the federal government has been offering a tax credit to wind producers. That policy, however -- and subsidies in general -- has been controversial, and the federal incentive was slated to end last year, resulting in a rush to expand production while the tax credit was still in place.

“On the one hand, these government motivators have been good enough that the U.S. now has the third-highest per capita wind power generation in the world,” according to Marketplace. “On the other hand, we are a distant third — behind Denmark and Germany.” Even after the massive expansion in 2020, the United States’ total wind energy capacity is just half that of China’s, and the country’s volatile, cyclical policy of subsidies followed by subsidy cancellations is part of the reason why. While wind power is unequivocally a reliable, cost-effective, and efficient means of carbon-free energy production, its continued expansion is not guaranteed without broad support.

By Haley Zaremba for Oilprice.com

AHS announces funding to ‘stabilize’ EMS staffing; union says it doesn’t solve issues

By Adam Toy 770 CHQR
Posted August 30, 2021
An ambulance travels along 14 Street N.W. in Calgary in response to an emergency call. 

Alberta Health Services (AHS) is making more temporary EMS positions permanent thanks to $8.3 million in new funding from the province.


But the union representing paramedical professionals says it doesn’t add any new positions, putting the system at risk.

According to a release from AHS, 70 casual positions will be made into temporary full-time, and 30 full-time positions hired in Alberta’s two largest cities in 2019 will continue to be funded.


“This funding will help stabilize EMS staffing levels and ensure that we are able to respond to Albertans and also take care of our staff,” Dr. Verna Yiu, AHS president and CEO, said in a statement.

According to AHS, EMS call volumes have jumped by 50 per cent since the COVID-19 pandemic began in March 2020. The provincial health authority said effects from the pandemic, smoke-related calls, heat-related events, and a return to pre-pandemic activities have increased calls to an average of 1,521 per day from about 1,095.

The announcement of the stabilization of 100 EMS positions was panned by the president of the Health Sciences Association of Alberta.

“While this funding is important to bolster and maintain 100 already existing positions, it doesn’t actually add a single paramedic to our overburdened health system,” Mike Parker said in a statement. “It doesn’t solve the issue of not having enough members hired.

“Every shift is being run short. Without hiring more new paramedics, the current government continues to put the system, our members, and every Albertan needing urgent medical care, at risk.”


In a statement, Health Minister Tyler Shandro said Monday’s announcement was a stop-gap measure.

“We need to do our best to support our paramedics and all healthcare workers now as we continue to see high demand on our healthcare services, and this decision by AHS should provide some tangible short term relief as we work on longer term solutions,” Shandro said.


Parker said more paramedics need to be hired by AHS immediately.

“This announcement does nothing to overcome the hiring crisis affecting emergency medical services, or AHS in general,” he said.
Too Much Meat During Ice Age Winters Gave Rise to Dogs, New Research Suggests

By George Dvorsky
1/07/21 11:23AM


A western gray wolf. Image: Jacob W. Frank (AP)



Two prevailing theories exist about the origin of domesticated dogs. One proposes that prehistoric humans used early dogs as hunting partners, and the other says that wolves were attracted to our garbage piles. New research suggests both theories are wrong and that the real reason has to do with our limited capacity to digest protein.

Dogs were domesticated from wild wolves during the last ice age between 14,000 and 29,000 years ago, and they were the first animals to be domesticated by humans. That humans and wolves should form a collaborative relationship is an odd result, given that both species are pack hunters who often target the same prey.

“The domestication of dogs has increased the success of both species to the point that dogs are now the most numerous carnivore on the planet,” wrote the authors of a new study published today in Scientific Reports. “How this mutually beneficial relationship emerged, and specifically how the potentially fierce competition between these two carnivores was ameliorated, needs to be explained.”

Indeed, given this context, it’s not immediately obvious why humans would want to keep wolves around. Moreover, the two prevailing theories about the origin of dogs—either as partners used for hunting or as self-domesticated animals attracted to our garbage—aren’t very convincing. Wolves, even when tamed, would’ve made for awful hunting partners, as they lacked the collaborative and advanced communication skills found in domesticated dogs. And sure, wild wolves were probably attracted to human scraps, but this would’ve required some unlikely interactions between humans and wolves.

“In our opinion, the self-domestication in this way is not fully explained,” Maria Lahtinen, a chemist and archaeologist at the Finnish Food Authority in Finland and the first author of the new study, said in an email. “Hunter-gatherers do not necessarily leave waste in the same place over and over again. And why would they tolerate a dangerous carnivore group in their close surroundings? Humans tend to kill their competitors and other carnivores.”

Lahtinen and her colleagues say there’s a more likely reason for the domestication of dogs, and it has to do with an abundance of protein during the harsh ice age winters, which subsequently reduced competition between the two species. This in turn allowed humans and incipient dogs to live in symbiotic harmony, paving the way for the ongoing evolution of both species.

The researchers have “introduced a really interesting hypothesis that seeks to address the long-debated mechanism by which early dog domestication occurred,” James Cole, an archaeologist at the University of Brighton who’s not involved with the new study, wrote in an email. “The idea is that human populations and wolves could have lived alongside each other during the harsh climatic conditions [of the last ice age] because human populations would have produced enough protein, through hunting activities, to keep both populations fed during the harsh winter months.”

Seems hard to believe, but humans likely had more food during ice age winters than they could handle. This is due to our inability to subsist exclusively on lean protein for months at a time—something wolves have no issues with. For humans, excessive consumption of protein can lead to hyperinsulinemia (excess insulin in the blood), hyperammonemia (excess ammonia in the blood), diarrhea, and in some extreme cases even death, according to the authors. To overcome this biological limitation, Pleistocene hunters adapted their diets during the winter months, targeting animal parts rich in fat, grease, and oils, such as lower limbs, organs, and the brain. And in fact, “there is evidence for such processing behavior during the Upper Palaeolithic,” according to the paper.

Consequently, wolves and humans were able to “share their game without competition in cold environments,” said Lahtinen. This in turn made it possible for humans to keep wolves as pets.

“Therefore, in the short term over the critical winter months, wolves and humans would not have been in competition over resources and may have mutually benefited from each other’s companionship,” wrote the authors. “This would have been critical in keeping the first proto-dogs for years and generations.”


A 7-week-old Mexican gray wolf puppy. Image: Jeff Roberson (AP)


It’s very possible, said Lahtinen, that the earliest dogs were wolf pups. Hunter-gatherers, she said, “do take pets in most cultures, and humans tend to find young animals cute,” so it would “not be a surprise if this would have happened.”

So dogs exist because wolf pups were cute and we had plenty of leftovers? Seems a legit theory, if you ask me.

Only later, due to traits introduced by artificial selection, were dogs used for hunting, guarding, pulling sleds, and so on, according to the researchers. This theory may also explain the complexity of early dog domestication, which appears to have occurred in Eurasia at multiple times, with dogs continuing to interbreed with wild wolves. The new theory may also explain why the domestication of dogs appears to have occurred in arctic and subarctic regions.

As for the summer months, that wasn’t as crucial for humans, given the relative abundance of food alternatives. During the critical winter months, however, “hunter-gatherers tend to give up their pets if there is a need to give up resources from humans,” said Lahtinen.

Importantly, Lahtinen and her colleagues did not pull this theory from thin air. To reach this conclusion, the team performed energy content calculations to estimate the amount of energy that would be left over from prey animals also hunted by wolves, such as deer, moose, and horses. The authors reasoned that, if humans and wolves were having to compete for these resources, there would be little to no cooperation between the two species. But their calculations showed that, aside from animals like weasels, all animals preyed upon by humans would have provided more lean protein than required.
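
The arithmetic behind that reasoning is easy to sketch. The toy calculation below mimics the logic with entirely illustrative numbers (the protein ceiling and carcass compositions are placeholders, not the per-species values used by Lahtinen and colleagues):

```python
# Toy version of the energy-budget reasoning described above: how much lean
# protein would have been left over for wolves once humans hit their
# physiological protein ceiling. All numbers are illustrative placeholders,
# not the values used in the study.
KCAL_PER_G_PROTEIN = 4
KCAL_PER_G_FAT = 9

def leftover_protein_kcal(carcass_protein_g, carcass_fat_g, protein_ceiling=0.35):
    """Energy (kcal) of lean protein beyond what humans could safely consume.

    protein_ceiling: assumed maximum share of total energy humans can draw
    from protein over a sustained period (illustrative value).
    """
    protein_kcal = carcass_protein_g * KCAL_PER_G_PROTEIN
    fat_kcal = carcass_fat_g * KCAL_PER_G_FAT
    total_kcal = protein_kcal + fat_kcal
    usable_protein_kcal = min(protein_kcal, protein_ceiling * total_kcal)
    return protein_kcal - usable_protein_kcal

# Lean winter prey (little fat): a large protein surplus is left for wolves.
print(f"Lean carcass:  {leftover_protein_kcal(30_000, 3_000):,.0f} kcal of surplus protein")
# Fattier carcass: humans can use more of the protein, so less is left over.
print(f"Fatty carcass: {leftover_protein_kcal(30_000, 15_000):,.0f} kcal of surplus protein")
```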

“Therefore, the early domesticated wolves could have survived living alongside human populations by consuming the excess protein from hunting that humans could not,” explained Cole. “By having enough food for both populations, the competitive niche between the species is eliminated, thereby paving the way to domestication and the benefits of such a relationship to the two species.”

Cole described it as a “really intriguing hypothesis” because it provides a “mechanism that can explain the domestication of the wolf across a wide geographic and temporal range,” and it does so by “explaining how two carnivorous species could overcome the competition...under harsh climatic conditions.” Looking ahead, Cole said a similar approach would be useful for studying the interactions of humans and other species on this planet over time.

As a relevant aside, Cole is the author of a fascinating Scientific Reports paper published in 2017 arguing that ancient humans didn’t turn to cannibalism for nutrition. Using an approach similar to the one taken in the Lahtinen paper, Cole showed that human flesh simply doesn’t pack the same amount of calories as wild animals, and cannibalism wouldn’t have been worth all the trouble.




Mars' weird geology is making Perseverance's job more complicated

Perseverance took a rock sample that crumbled to dust, confounding scientists. Geologists now know what happened


By NICOLE KARLIS
PUBLISHED AUGUST 26, 2021 7:00PM (EDT)
Perseverance's Selfie with Ingenuity (NASA/JPL-Caltech/MSSS)

Earlier this month, the Perseverance rover set out to collect some rock samples on Mars. It was supposed to be a key moment in the rover's historic sample-return mission, one in which Perseverance was to collect, store and return Martian rock and soil samples to Earth. (The rocket that will pick up the samples hasn't launched yet, and may not for almost a decade; currently, Perseverance is doing the grunt work of collection.) To date, Perseverance had been highly successful: its risky landing worked perfectly, and Ingenuity, the 4-pound helicopter that hitched a ride to Mars on Perseverance's back, overcame massive barriers to achieve the first powered, controlled flight on another planet. Compared to those feats, Perseverance's next task — drilling out a finger-sized hole in a rock — seemed simple. But after the drilling, the collection tube came back empty. Mission control was in disbelief.

As Salon previously reported, scientists rushed to figure out why the sample went missing. Did the drill somehow miss? It didn't seem so — images from the Red Planet revealed there was a hole in the rock.

So what happened once the drill came out of the rock?

After some sleuthing, NASA's Perseverance team determined that the rock most likely crumbled into "small fragments" — essentially, a powder. While the pulverization of the rock sample was disappointing to the team, it was also a lesson in Martian geology.

"It's certainly not the first time Mars has surprised us," said Kiersten Siebach, an assistant professor of planetary biology at Rice University and participating scientist on the science and operations team for Perseverance. "A big part of exploration is figuring out what tools to use and how to approach the rocks on Mars."

Siebach explained that something similar sometimes happens to geologists here on Earth. Certain rocks look solid, their appearance having been retained by their chemistry. But weathering events and erosion can weaken that chemistry.

"If you've hiked in California, sometimes it looks like you're hiking next to a rock. But if you kick it, it falls apart into dust," Siebach said. "It's probably something like that, where there's been more weather than anticipated."

Mars is a curious place, geologically speaking. The surface of the planet is rocky and dusty, and thanks to previous missions like the Sojourner rover, Spirit, Opportunity and Curiosity, we know that the soil is toxic. High concentrations of perchlorate compounds, meaning compounds containing chlorine, have been detected and confirmed on multiple occasions. In some spots, there are volcanic basaltic rocks like the kind that we have on Earth in Iceland, Hawaii or Idaho.

Raymond Arvidson, professor of earth and planetary sciences at Washington University in St. Louis and a Curiosity science team member, explained that one big difference between Earth and Mars is that Earth has active plate tectonics — meaning that Earth's surface is composed of vast, continent-spanning "plates" that move, shift and abut against each other, creating valleys and mountains. Such geology has given Earth places like the Sierra Nevada mountain range. Mars, however, never had plate tectonics.

"So those very primitive rocks that are called the basaltic, like we have in the oceans — that's the dominant mineralogy and composition of rocks on Mars," Arvidson said. "It's basically a basalted planet — not as complicated as here, not as many rocks." Jezero Crater, a 28 mile-wide impact crater and former lake located north of the Martian equator, is where Perseverance touched town. Arvidson noted that the crater has diverse geology: "It has clays, it has faults and carbonate, many of them produced [around] three and a half billion years ago."

For that reason, scientists believe Jezero may be an ideal spot to search for ancient signs of microbial life on Mars. Perseverance is now headed to the next sampling location in South Seitah, which is within Jezero Crater.

Notably, the tubes and instruments on Perseverance were built to collect more solid samples, and that's because the aim of this mission is to see if these rocks contain evidence of microbes, or any ancient fossilized life.

"Do these rocks contain evidence for life?" Arvidson asked. "To answer those questions, you need to get the rock back to Earth."

Arvidson said that these soft sedimentary rocks that turn into powder when you drill are "everywhere" on Mars. Previous rovers encountered them too.

"For example with Curiosity, which landed in Gale Crater in 2012 — and we'd been driving up the side of the mountain called Mount Sharp — we encountered soft sedimentary rocks that were easy to drill, and we'd get powders back," Arvidson said. "Then we found really hard rock that we couldn't drill into, so we gave up. Jezero is going to have hard rocks and soft rocks."

As Siebach previously mentioned, what happened with Perseverance is a learning experience. Scientists, Siebach said, rely on a basaltic signal from orbit to determine the mineralogy and composition of Jezero Crater's floor.

"It's a little bit ambiguous. . . we don't see a strong signal of hydration or something in these rocks in particular, instead, they look like most rocks on Mars which means they have a lot of these volcanic minerals and some dust on top," Siebach noted. However, orbital surveillance is not foolproof. "We don't know whether this crater floor was actually volcanic," Siebach added.

Hence, scientists won't always be certain about the consistency of the sample areas they choose to drill. But once on Mars, it's a mix of science, educated guessing, and luck to really find what they're looking for to bring back home.

"Some of these rocks could have a composition that makes it look igneous, when they could be sedimentary or igneous rocks," Siebach said. "That's the kinds of compositions we're seeing that makes it challenging and fun."

Siebach emphasized she has confidence that Perseverance will have success in sampling some of the other rocks.

"Those surprises and those unexpected events are what drives our curiosity and asking more questions, and learning more about this history of Mars that is written in these rocks," Siebach said. "If the sampling doesn't go as we expect, those surprises are inherent to discovery, and will drive us to learn more."

But the truly exciting science will happen when the samples eventually make it back to Earth.

"We will be able to learn so much about Mars from those samples," Siebach said.

SOUTH AFRICA

Activists plan court action against government’s new coal-fired power plants after report finds there’s ‘no such thing as clean coal’


This article was edited post-publication to remove specific information regarding the strategy of the litigation.

GroundWork, Vukani Environmental Movement and the African Climate Alliance, represented by the Centre for Environmental Rights (CER) are preparing a court challenge to the government’s plans to procure electricity from new coal-fired power plants over the next 10 years.

These plans are set out in the government’s 2019 Integrated Resource Plan for Electricity (IRP) and the Minister of Energy’s determination for 1,500MW of new coal-fired generation capacity.

The court challenge will focus on the protection of people’s rights in the South African Constitution, showing that the use of fossil fuels for power generation is harmful and violates many human rights, including the right to a healthy environment.

The applicants will seek relief from the Minister of Mineral Resources and Energy, the National Energy Regulator of South Africa, the Minister of Forestry, Fisheries and Environment, the National Air Quality Officer and the President of the Republic of South Africa (the Respondents), cited in their official capacities.

An expert report on air pollution by Dr Ranajit Sahu will form part of the litigation. Sahu confirmed that South Africa’s proposed 1,500MW of new coal-powered electricity generation will cause significant air pollution and greenhouse gas emissions, even if the cleanest technology currently available is used.

In 2019, the South African government proposed adding 1,500MW of new coal generation in the country, as part of the Integrated Resource Plan for Electricity (IRP). The IRP claims that such coal generation will be cleaner because high-efficiency, low-emission (HELE) generation technology will be used, although it does not state which kind.

In the report commissioned by the CER for the activist groups, Sahu — an engineer with more than three decades of experience in power plant design — assessed the potential air emissions of the most likely types of HELE technology that could be used.

He found that even in the best-case scenario, in which the cleanest available technology is used, large quantities of greenhouse gas emissions are unavoidable.

Sahu considered two likely technologies that could be used: pulverised coal units and circulating fluidised bed technology. He found that pulverised coal units — even when operating at ultra-supercritical efficiency — will not be able to capture their emitted carbon dioxide due to extremely high costs.

In the case of circulating fluidised bed technology, which is considered preferable by the IRP due to its ability to handle low-quality coal, Sahu found that this technology emits from two to 10 times more nitrous oxide than pulverised coal technologies. Nitrous oxide is a potent, long-lasting greenhouse gas with a global warming potential 300 times that of carbon dioxide.
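
That global warming potential is what allows nitrous oxide emissions to be compared with carbon dioxide on a common scale: the emitted mass is simply multiplied by the GWP. A minimal sketch, using the factor of 300 quoted above and a made-up emission quantity:

```python
# Minimal sketch of how a global warming potential (GWP) converts nitrous
# oxide emissions into CO2-equivalent terms, using the factor of roughly 300
# quoted above. The emission quantity is a made-up illustrative figure, not a
# number from Sahu's report.
GWP_N2O = 300                      # from the article: ~300x CO2

def co2_equivalent_tonnes(n2o_tonnes, gwp=GWP_N2O):
    """Convert a mass of N2O (tonnes) into tonnes of CO2-equivalent."""
    return n2o_tonnes * gwp

illustrative_n2o_tonnes = 1_000    # hypothetical annual N2O emissions
print(f"{illustrative_n2o_tonnes} t N2O ~ {co2_equivalent_tonnes(illustrative_n2o_tonnes):,} t CO2-eq")
```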

“I want to stress that contrary to implications in the 2019 IRP and the ministerial determination, there is simply no such thing as ‘clean coal’, regardless of whether HELE technologies are used to minimise air emissions from coal (or gas derived from coal),” Sahu said.

The report is the latest piece of research that supports the view that new coal generation in South Africa will be unnecessary, costly and highly detrimental to the environment. It follows previous investigations into the coal cycle (mining, production, supply and disposal) which prove that “clean coal” is an impossibility.

“New coal generation flies in the face of the South African government’s obligation under international and South African law, including the South African Constitution, to take all reasonable measures to protect its people from the impacts of climate change,” said Sahu.

He found that it is unreasonable to expect 750MW (or any amount) of new coal generation could come online by 2023. It takes much longer than four years to achieve generation starting from scratch, especially with the many unknowns relating to HELE technology selection, design, procurement and implementation.

The integrated gasification combined cycle and underground gasification combined cycle power plants, and carbon capture (CC) technologies are unproven and cost-prohibitive at scale, and extremely unlikely to be implemented for the 1,500MW of new coal proposed under the 2019 IRP.

“The worldwide progress of carbon capture technology has been sluggish, at best. Per the Global CCS Institute, there are currently 23 CC projects in construction or operation around the world. But a review of the website listing the projects shows that not one is located at a coal-fired power plant of commercial scale.

“While the CO2 emissions intensity for coal plants is reduced somewhat as a result of increasing the efficiency of the thermal cycle, major reductions in CO2 intensity can only be achieved by way of carbon capture.

“Based on the track record of carbon capture to date globally, it is my opinion that there is simply no pathway to economically utilise carbon capture in South Africa now or in the foreseeable future for reducing CO2 emissions from new coal generation,” said Sahu. DM/OBP

This article first appeared on Daily Maverick and is republished here under a Creative Commons license.