Thursday, December 19, 2019

Fossil expands ancient fish family tree


Artist's impression of the newly described lungfish Isityumzi eating bivalves in a Late Devonian freshwater environment. Credit: Maggie Newman
A second ancient lungfish has been discovered in Africa, adding another piece to the jigsaw of evolving aquatic life forms more than 400 million years ago.
The new fossil genus Isityumzi mlomomde was found about 10,000 km from a previous species described in Morocco, and is of interest because it existed in a high-latitude (70 degrees south) or polar environment at the time.
Flinders University researcher Dr. Alice Clement says the "scrappy" fossil remains, including tooth plates and scales, were found in the Famennian Witpoort Formation of the Eastern Cape of South Africa.
"This lungfish material is significant for a number of reasons," Dr. Clement says.
"Firstly it represents the only Late Devonian lungfish known from Western Gondwana (when South America and Africa were one continent). During this period, about 372-359 million years ago, South Africa was situated next to the South Pole," she says.
"Secondly, the new taxon from the Waterloo Farm locality seems to have lived in a thriving ecosystem, indicating this region was not as cold as the polar regions of today."
Dr. Clement says the animal would still have been subject to long periods of winter darkness, very different from the warm environments lungfish inhabit today. Only six known species of lungfish survive, living only in Africa, South America and Australia.
Isityumzi mlomomde means "a long-mouthed device for crushing" in isiXhosa, one of the official languages of South Africa.
Around 100 kinds of primitive lungfish (Dipnoi) evolved from the Early Devonian period more than 410 million years ago. More than 25 originated in Australia (Gondwana), and others are known to have lived in temperate, tropical and subtropical waters of China and Morocco in the Northern Hemisphere.
Lungfish are the group of fish most closely related to all tetrapods—all limbed vertebrates, including amphibians, reptiles, birds and mammals.
"In this way, a lungfish is more closely related to humans than it is to a goldfish!" says Dr. Clement, who has been involved in naming three other new ancient lungfish.
The paper, "A high latitude Devonian lungfish, from the Famennian of South Africa" (2019) by RW Gess and AM Clement has been published in PeerJ.

Lungfishes are not airheads

Not exactly the great-great-grandma you were expecting for Thanksgiving Dinner. Credit: Sarah Gibson
It's November, a month to ruminate on all of the things we are thankful for while we ruminate copious amounts of food (at least in the United States). I've been contemplating all of the things that I am thankful for, besides the usual suspects (you know, friends, family, a pretty cool research project, and, of course, the PLOS Paleo Community!).
You know what else I am thankful for? I'm thankful for lungfishes.
Lungfishes are pretty spectacular organisms, and also utterly bizarre. In fact, our knowledge of extant lungfishes, their biology, and their evolutionary relationships to other fishes or tetrapods was confusing at first. The South American lungfish, Lepidosiren paradoxa, got its specific name due to its mosaic of fish and tetrapod characteristics, and was thought to have been a reptile when it was described in 1836. The West African lungfish, Protopterus annectens, was thought to be an amphibian when it was described in 1837. These critters confused a lot of taxonomists for a lot of years, but eventually it was realized that they belonged within Dipnoi, the lungfishes, a group within Sarcopterygii (a group that includes coelacanths and, well, ourselves!). Now, almost all morphological and molecular phylogenetic studies accept that lungfishes are more closely related to tetrapods than coelacanths are.
Lungfishes have a massive evolutionary history, with their peak diversity of around 100 species occurring around 359–420 million years ago during the Devonian Period. Nowadays, their family get-togethers are a little smaller, with just six living species occurring in South America and Africa (Lepidosiren and Protopterus, the Lepidosirenidae), and Australia (Neoceratodus, a single species belonging to Neoceratodontidae). These two groups are thought to have diverged sometime during the Permian (~277 Ma), and when you've been away from your relatives for that long, it can be expected that you'll become quite different. While both have thick bodies with broad tails and distinguishing toothplates used for crushing prey, notable external differences include the filamentous "noodle" pectoral and pelvic fins of the Lepidosirenidae compared to the thicker, paddle-like fins of Neoceratodus.
An African Lungfish with his wimpy noodle arms. Credit: Sarah Gibson
There's a lot we still don't know about the closest living relatives of all tetrapods. A paper that came out last month in PLOS ONE by Alice M. Clement, Johan Nysjö, Robin Strand, and Per E. Ahlberg set out to study one such aspect of lungfishes: the brain/cranial endocast relationship.
When lacking soft tissue, as with most fossils, paleontologists use the size of the cranial cavity (the endocast) to estimate the size of the brain, which can help us infer the relative intelligence or cognition of the organism by comparing the size of the brain to the size of the organism itself. This can be problematic, though, depending on what group you are studying. Clement et al. (2015) note that the brain-endocast relationship of amniotes (birds, reptiles, mammals, etc.) is more tightly constrained than what is observed in some fishes. For example, some living chondrichthyans, such as the basking shark Cetorhinus, can have a brain that occupies only around 6% of the endocranial cavity. Stranger still is the living coelacanth Latimeria, whose brain occupies a tiny 1% of the endocranial cavity. On the flip side, Clement et al. (2015) note, ray-finned fishes can have a close match between brain size and endocast size. This variability in the brain-to-endocast relationship is unusual, and one that author Clement told me can only be understood by expanding the datasets and the taxa for which we know the brain-endocranial relationship, something that she and her colleagues are continuing to work on.
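The comparisons above reduce to a simple fill ratio. A minimal sketch in Python, using invented volumes purely to illustrate the percentages quoted in the text (the function name and numbers are illustrative, not measurements from Clement et al.):

```python
def brain_fill_percent(brain_volume, endocast_volume):
    """Percentage of the endocranial cavity occupied by the brain."""
    return 100.0 * brain_volume / endocast_volume

# A basking-shark-like case: brain fills ~6% of the endocast
print(brain_fill_percent(6.0, 100.0))   # 6.0
# A Latimeria-like case: brain fills ~1% of the endocast
print(brain_fill_percent(1.0, 100.0))   # 1.0
```

The same ratio is what makes endocast-based inferences risky for fishes: a fossil endocast gives only an upper bound on brain volume, not the fill percentage itself.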
Where do lungfishes fit in this brain-endocast relationship spectrum? Clement et al. (2015) used specimens of the Australian lungfish Neoceratodus forsteri to examine this relationship. Using high-resolution X-ray computed tomography (CT) scanning techniques and computer analyses outlined in detail in the paper, Clement and colleagues examined the size, anatomy, and morphology of the brain of Neoceratodus.
They concluded that the brain fits the endocast pretty closely, particularly in the forebrain and labyrinth (inner ear) regions. The paper beautifully diagrams the brain-to-endocast spatial relationship.
X-ray microtomographic images of iodine-treated Neoceratodus forsteri (ANU 73578).A-F in transverse view moving posteriorly; G, 3D rendering of whole specimen in left lateral view; and H, diagram showing position of slices A-F.
A PLOS ONE paper from last year by two of the authors here (Clement and Ahlberg, 2014) examined the endocast of a fossil lungfish Rhinodipterus from the Devonian Gogo Formation of Australia, and found similarity between it and the brain of Neoceratodus. Some general inferences about the functional significance of different sections of the brain can be made. Clement and Ahlberg (2014) note that the enlarging of the telencephalic region of lungfishes over time (between Devonian Rhinodipterus and the extant Neoceratodus) is probably related to increased reliance upon this part of the brain.
"The forebrain is associated with olfaction; perhaps as lungfishes moved from open marine environments in the Devonian to murkier, freshwater, swamp-like environments (like we see them in today), their reliance on smell increased," Clement told me. She continues, "Similarly, the midbrain (where the optic lobes are) is greatly reduced in lungfishes, suggesting that they don't rely on sight very much, compared to most actinopterygian fishes."
Brain-endocast spatial relationship in Neoceratodus, left lateral view. A, brain; B, endocast; C, overlay; D and E, distance maps. Warmest colors indicate greatest distance.
The work by Clement and colleagues has implications beyond lungfish anatomy. Clement et al. (2015) clearly demonstrates the care that paleontologists, specifically paleoneurologists, should use when studying the cranial endocasts of fossil taxa. Clement notes, "I think we must always use caution when interpreting the endocasts of fossils in terms of gross brain morphology, as we can't know the brain-endocranial relationship in [extinct] taxa. However, the fact remains that no brain region can be larger than the endocranial cavity that housed it, so we are given maximal proportions at least."
Clement further states, "Endocasts themselves are often highly rich in morphological characters (whether related to the brain inside them or not) useful for comparative, and probably also phylogenetic, analyses across taxa. In my opinion, the great advances in scanning technology mean that virtual palaeoneurology is on the cusp of a boom!"

Robot experiment shows people trust robots more when robots explain what they are doing




The Baxter robot reaches for a pill bottle. Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)
A team of researchers from the University of California Los Angeles and the California Institute of Technology has found via experimentation that humans tend to trust robots more when they communicate what they are doing. In their paper published in the journal Science Robotics, the group describes programming a robot to report what it was doing in different ways and then showing it in action to volunteers.

As robots become more advanced, they are expected to become more common—we may wind up interacting with one or more of them on a daily basis. Such a scenario makes some people uneasy—the thought of interacting or working with a machine that not only carries out specific assignments, but does so in seemingly intelligent ways might seem a little off-putting. Scientists have suggested one way to reduce the anxiety people experience when working with robots is to have the robots explain what they are doing.
In this new effort, the researchers noted that most work being done with robots is focused on getting a task done—little is being done to promote harmony between robots and humans. To find out if having a robot simply explain what it is doing might reduce anxiety in humans, the researchers taught a robot to unscrew a medicine cap—and to explain what it was doing as it carried out its task in three different ways.
The first type of explanation was called symbolic, or mechanistic, and it involved having the robot display on a screen each of the actions it was performing as part of a string of actions, e.g. grasp, push or twist. The second type of explanation was called haptic, or functional. It involved displaying the general function that was being performed as the robot went about each step in a task, such as approaching, pushing, twisting or pulling. Volunteers who watched the robot in action were also shown a simple text message that described what the robot was going to do.



Explanations generated by the symbolic planner (above panels) and the haptic model (bottom panels). Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)
The researchers then asked 150 volunteers to watch as the robot opened a medicine bottle along with accompanying explanations. The researchers report that the volunteers gave the highest trust ratings when they were shown both the haptic and symbolic explanations. The lowest ratings came from those who saw just the text message. The researchers suggest their experiment showed that humans are more likely to trust a robot if they are given enough information about what the robot is doing. They say the next step is to teach robots to report why they are performing an action.





Video of combined symbolic and haptic explanation by the robot. Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)


Video of haptic explanation. Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)


Video of symbolic explanation. Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)


Video of summary text-only explanation. Credit: Edmonds et al., Sci. Robot. 4, eaay4663 (2019)

AI's future potential hinges on consensus: NAM report

The role of artificial intelligence, or machine learning, will be pivotal as the industry wrestles with a gargantuan amount of data that could improve—or muddle—health and cost priorities, according to a National Academy of Medicine Special Publication on the use of AI in health care.
Yet the current explosion of investment and development is happening without an underpinning consensus on responsible, transparent deployment, which could constrain the technology's potential.
The new report is designed to be a comprehensive reference for organizational leaders, data analysts, model developers and those who are working to integrate machine learning into health care, said Vanderbilt University Medical Center's Michael Matheny, MD, MS, MPH, Associate Professor in the Department of Biomedical Informatics, and co-editor of AI in Healthcare: The Hope, The Hype, The Promise, The Peril.
"It's critical for the health care community to learn from the successes, but also from the challenges and recent failures, in the use of these tools. We set out to catalog important examples in health care AI, highlight best practices around AI development and implementation, and offer key points that need to be discussed for consensus to be achieved on how to address them as an AI community and society," said Matheny.
Matheny underscores that the applications in health care look nothing like the mass-market imagery of self-driving cars that is often synonymous with artificial intelligence or tech-driven systems.
For the immediate future in health care, AI should be thought of as a tool to support and complement the decision-making of highly trained professionals in delivering care in collaboration with patients and their goals, Matheny said.
Recent advances in deep learning and related technologies have met with great success in imaging interpretation, such as radiology and retina exams, which has spurred a rush toward AI development that brought first venture capital funding and then industry giants. However, some of the tools have had problems with bias stemming from the populations they were developed on, or from the choice of an inappropriate target. Data analysts and developers need to work toward increased data access and standardization, as well as thoughtful development, so algorithms aren't biased against already marginalized patients.
The editors hope this report can contribute to the dialog of patient inclusivity and fairness in the use of AI tools, and the need for careful development, implementation, and surveillance of them to optimize their chance of success, Matheny said.
Matheny along with Stanford University School of Medicine's Sonoo Thadaney Israni, MBA, and Mathematica Policy Research's Danielle Whicher, Ph.D., MS, penned an accompanying piece for JAMA Network about the watershed moment in which the industry finds itself.
"AI has the potential to revolutionize health care. However, as we move into a future supported by technology together, we must ensure high data quality standards, that equity and inclusivity are always prioritized, that transparency is use-case-specific, that new technologies are supported by appropriate and adequate education and training, and that all technologies are appropriately regulated and supported by specific and tailored legislation," the National Academy of Medicine wrote in a release.
"I want people to use this report as a foil to hone the national discourse on a few key areas including education, equity in AI, uses that support human cognition rather than replacing it, and separating out AI transparency into data, algorithmic, and performance transparency," said Matheny.
More information: National Academy of Medicine Special Publication: nam.edu/artificial-intelligenc … special-publication/
Michael E. Matheny et al. Artificial Intelligence in Health Care, JAMA (2019). DOI: 10.1001/jama.2019.21579

Smelly, poisonous molecule may be a sure-fire sign of extraterrestrial life

Phosphine, a molecule known on Earth for its smelly and toxic nature, may be a sure sign of alien life if detected in nearby exoplanets. Credit: NASA, edited by MIT News
Phosphine is among the stinkiest, most toxic gases on Earth, found in some of the foulest of places, including penguin dung heaps, the depths of swamps and bogs, and even in the bowels of some badgers and fish. This putrid "swamp gas" is also highly flammable and reactive with particles in our atmosphere.
Most life on Earth, specifically all aerobic, oxygen-breathing life, wants nothing to do with phosphine, neither producing it nor relying on it for survival.
Now MIT researchers have found that phosphine is produced by another, less abundant life form: anaerobic organisms, such as bacteria and microbes, that don't require oxygen to thrive. The team found that phosphine cannot be produced in any other way except by these extreme, oxygen-averse organisms, making phosphine a pure biosignature—a sign of life (at least of a certain kind).
In a paper recently published in the journal Astrobiology, the researchers report that if phosphine were produced in quantities similar to methane on Earth, the gas would generate a signature pattern of light in a planet's atmosphere. This pattern would be clear enough to detect from as far as 16 light years away by a telescope such as the planned James Webb Space Telescope. If phosphine is detected from a rocky planet, it would be an unmistakable sign of extraterrestrial life.
"Here on Earth, oxygen is a really impressive sign of life," says lead author Clara Sousa-Silva, a research scientist in MIT's Department of Earth, Atmospheric and Planetary Sciences. "But other things besides life make oxygen too. It's important to consider stranger molecules that might not be made as often, but if you do find them on another planet, there's only one explanation."
The paper's co-authors include Sukrit Ranjan, Janusz Petkowski, Zhuchang Zhan, William Bains, and Sara Seager, the Class of 1941 Professor of Earth, Atmospheric, and Planetary Sciences at MIT, as well as Renyu Hu at Caltech.
Giant bellies
Sousa-Silva and her colleagues are assembling a database of fingerprints for molecules that could be potential biosignatures. The team has amassed more than 16,000 candidates, including phosphine. The vast majority of these molecules have yet to be fully characterized, and if scientists were to spot any of them in an exoplanet's atmosphere, they still wouldn't know whether the molecules were a sign of life or something else.
But with Sousa-Silva's new paper, scientists can be confident in the interpretation of at least one molecule: phosphine. The paper's main conclusion is that, if phosphine is detected in a nearby, rocky planet, that planet must be harboring life of some kind.
The researchers did not come to this conclusion lightly. For the last 10 years, Sousa-Silva has devoted her work to fully characterizing the foul, poisonous gas, first by methodically deciphering phosphine's properties and how it is chemically distinct from other molecules.
In the 1970s, phosphine was discovered in the atmospheres of Jupiter and Saturn—immensely hot gas giants. Scientists surmised that the molecule was spontaneously thrown together within the bellies of these gas giants and, as Sousa-Silva describes, "violently dredged up by huge, planet-sized convective storms."
Still, not much was known about phosphine, and Sousa-Silva devoted her graduate work at University College London to pinning down phosphine's spectral fingerprint. From her thesis work, she nailed down the exact wavelengths of light that phosphine should absorb, and that would be missing from any atmospheric data if the gas were present.
During her Ph.D., she began to wonder: Could phosphine be produced not just in the extreme environments of gas giants, but also by life on Earth? At MIT, Sousa-Silva and her colleagues began answering this question.
"So we started collecting every single mention of phosphine being detected anywhere on Earth, and it turns out that anywhere where there's no oxygen has phosphine, like swamps and marshlands and lake sediments and the farts and intestines of everything," Sousa-Silva says. "Suddenly this all made sense: It's a really toxic molecule for anything that likes oxygen. But for life that doesn't like oxygen, it seems to be a very useful molecule."
"Nothing else but life"
The realization that phosphine is associated with anaerobic life was a clue that the molecule could be a viable biosignature. But to be sure, the group had to rule out any possibility that phosphine could be produced by anything other than life. To do this, they spent the last several years running many species of phosphorus, phosphine's essential building block, through an exhaustive theoretical analysis of chemical pathways, under increasingly extreme scenarios, to see whether phosphorus could turn into phosphine in any abiotic (meaning non-life-generating) way.
Phosphine is a molecule made from one phosphorus atom and three hydrogen atoms, which normally do not prefer to come together. It takes enormous amounts of energy, such as in the extreme environments within Jupiter and Saturn, to smash the atoms together with enough force to overcome their natural aversion. The researchers worked out the chemical pathways and thermodynamics involved in multiple scenarios on Earth to see if they could produce enough energy to turn phosphorus into phosphine.
"At some point we were looking at increasingly less-plausible mechanisms, like if tectonic plates were rubbing against each other, could you get a plasma spark that generated phosphine? Or if lightning hit somewhere that had phosphorus, or a meteor had a phosphorus content, could it generate an impact to make phosphine? And we went through several years of this process to figure out that nothing else but life makes detectable amounts of phosphine," Sousa-Silva says.
Phosphine, they found, has no significant false positives, meaning any detection of phosphine is a sure sign of life. The researchers then explored whether the molecule could be detectable in an exoplanet's atmosphere. They simulated the atmospheres of idealized, oxygen-poor, terrestrial exoplanets of two types: hydrogen-rich and carbon dioxide-rich atmospheres. They fed into the simulation different rates of phosphine production and extrapolated what a given atmosphere's spectrum of light would look like given a certain rate of phosphine production.
They found that if phosphine were produced at relatively small amounts equivalent to the amount of methane produced on Earth today, it would produce a signal in the atmosphere that would be clear enough to be detected by an advanced observatory such as the upcoming James Webb Space Telescope, if that planet were within 5 parsecs, or about 16 light years, of Earth—a sphere of space that covers a multitude of stars, likely hosting rocky planets.
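The distance quoted is just a unit conversion: one parsec is about 3.26156 light years, so the 5-parsec detection limit works out to roughly 16 light years. A quick sketch:

```python
# IAU-based conversion factor: light years per parsec
LY_PER_PARSEC = 3.26156

def parsecs_to_light_years(pc):
    """Convert a distance in parsecs to light years."""
    return pc * LY_PER_PARSEC

# The 5-parsec detection radius quoted in the article:
print(round(parsecs_to_light_years(5), 1))  # 16.3
```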
Sousa-Silva says that, aside from establishing phosphine as a viable biosignature in the search for extraterrestrial life, the group's results provide a pipeline, or process, for researchers to follow in characterizing any of the other 16,000 biosignature candidates.
"I think the community needs to invest in filtering these candidates down into some kind of priority," she says. "Even if some of these molecules are really dim beacons, if we can determine that only life can send out that signal, then I feel like that is a goldmine."

Addressing committed emissions in both US and China requires carbon capture and storage


Stabilizing global temperatures will require deep reductions in carbon dioxide (CO2) emissions worldwide. Recent integrated assessments of global climate change show that CO2 emissions must approach net-zero by mid-century to avoid exceeding the 1.5°C climate target. However, "committed emissions," those emissions projected from existing fossil fuel infrastructure operating as they have historically, already threaten that 1.5°C goal. With the average lifespan of a coal plant being over 40 years, proposed or under-construction power plants only add to that burden, further increasing the challenge of achieving net-zero emissions by 2050.
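The scale of "committed emissions" from even a single plant can be seen with back-of-envelope arithmetic. The parameters below (a 1 GW plant, 60% capacity factor, about 1 tCO2 per MWh, 40 remaining years of operation) are illustrative assumptions, not figures from the article:

```python
def committed_emissions_mt(capacity_gw, capacity_factor,
                           tco2_per_mwh, remaining_years):
    """Lifetime CO2 (in megatonnes) committed by one power plant,
    assuming it runs as it has historically until retirement."""
    mwh_per_year = capacity_gw * 1000 * 8760 * capacity_factor
    return mwh_per_year * tco2_per_mwh * remaining_years / 1e6

# One assumed 1 GW coal plant over 40 remaining years:
print(round(committed_emissions_mt(1.0, 0.6, 1.0, 40)))  # 210 (Mt CO2)
```

Multiplied across thousands of existing and proposed plants, sums like this are what put the 1.5°C budget at risk.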
The deep decarbonization required for net-zero emissions will require existing and proposed fossil-energy infrastructure to follow one of two pathways: either prematurely retiring or capturing and storing their emissions, thus preventing their release into the atmosphere. Carbon capture and storage (CCS) represents the only major viable path for fossil-fuel infrastructure to reach net-zero, short of being shuttered.
In a Viewpoint Article recently published in Environmental Science & Technology, Haibo Zhai outlines how the U.S. and China, the world's two largest emitters, should address their committed emissions. "In both countries, CCS retrofits to existing infrastructure are essential for reducing emissions to net-zero," said Zhai, an Associate Research Professor of Engineering and Public Policy at Carnegie Mellon University. However, differences in the power-plant fleets and the energy mix in the two countries point to separate routes for achieving deep decarbonization.
In the U.S., the energy landscape has changed dramatically over the past two decades. Coal was the dominant source of electricity (51% of total power generation in 2000) for most of the twentieth century, but has recently been displaced by cheap and abundant natural gas as well as growth in renewables. Coal accounted for just 27% of U.S. power generation in 2019. Coal's decline is expected to continue in favor of cheaper alternatives, due to the U.S.'s relatively old (40 years) and inefficient (32% efficiency) fleet of coal-fired plants.
Zhai does not see CCS retrofits to U.S coal plants as a fleet-wide approach to decarbonization, though there is potential for partial capture at the most efficient plants. CCS development should instead be focused on combined-cycle natural gas plant retrofits. "Natural gas has helped reduce the carbon intensity of the U.S. power sector, but this wave of new gas plants still represents a significant amount of committed emissions," said Zhai.
China is the opposite of the U.S. in terms of its energy mix and its fossil energy infrastructure. Coal supplies almost 65% of the nation's electricity. Coal-fired plants in China have a median age of only 12 years and much higher efficiencies (often greater than 40%) compared to the U.S. "Such a young fleet is unlikely to be phased out anytime soon," said Zhai. "Any path for China to achieve deep decarbonization must include CCS retrofits to its recently-built coal plants."
Despite the necessity of CCS, the technology has not been proven on a large scale and remains very costly. Only two commercial-scale CCS projects currently operate in the world: Petra Nova in the U.S. and Boundary Dam in Canada. Current CCS technologies carry high energy and cost penalties associated with separating CO2 out of process waste streams.
CCS, according to Zhai, currently sits on the steep part of the "learning curve." With any technology, first-of-a-kind deployments are always expensive. However, industry-wide learning—through technology developments such as improved separation materials and processes, supply chain expansion, and increases in operational efficiency—makes later deployments cheaper. Moving down the learning curve represents a chicken-and-egg dilemma for CCS: to be widely deployed, it needs to be cheap, and to be cheap, it needs to have been deployed.
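Learning curves of this kind are often modeled with Wright's law: each doubling of cumulative deployments multiplies unit cost by a fixed factor. A minimal sketch, with the 20% cost reduction per doubling chosen purely for illustration (not a figure from Zhai's article):

```python
import math

def unit_cost(n, first_cost, learning_rate=0.8):
    """Wright's-law cost of the n-th deployment: each doubling of
    cumulative deployments multiplies unit cost by learning_rate."""
    b = math.log(learning_rate, 2)  # negative exponent for rates < 1
    return first_cost * n ** b

# With an assumed 20% reduction per doubling, the 8th plant
# (three doublings) costs 0.8**3 = 51.2% of the first:
print(round(unit_cost(8, 100.0), 1))  # 51.2
```

This is why early, subsidized deployments matter: they are the expensive units that move every later unit down the curve.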
Therefore, there is a strong reason that governments should incentivize early adoption of CCS through regulatory, economic, and policy means, argues Zhai. He points to case studies of other low-carbon technologies, like photovoltaic solar panels, that have become cost-competitive after incentives helped lower high initial costs. Because early deployment is required to make future deployment economical, the time to act is now, he says.
"If you accept the premise that committed emissions are a problem, there is no choice other than CCS," he said. "And incentives are required to kick-start deployment of CCS on the scale needed to address the issue."
In the U.S., Zhai points to incentives for retrofitting natural gas-fired plants with CCS, expecting market forces to address the committed emissions from coal-fired plants, as the aging coal fleet continues to phase out. Zhai's article points to a tax credit for carbon sequestration in the U.S. as a major policy lever to incentivize these CCS efforts.
In China, on the other hand, CCS development for coal plant retrofits should be the major focus. There, Zhai notes that the national emissions trading system, where emitters can buy or sell CO2 emissions credits, will be the major policy lever that can spur development of mitigation technologies. In both cases, the current high costs of CCS point to government policies as a key step in overcoming the expensive initial phases of deployment.
Major co-benefits of incentivizing CCS development for existing fossil-fuel infrastructure are the role CCS will likely play in certain negative emissions technologies (NETs) and a decreased dependence on expensive NETs in the future. Bioenergy with CCS (BECCS), for instance, is outlined as the most prominent NET option. However, a key subsystem for any BECCS is CCS. Developing CCS now, argues Zhai, means that BECCS will be poised to help address global climate change in the future.
Haibo Zhai. Deep Reductions of Committed Emissions from Existing Power Infrastructure: Potential Paths in the United States and China, Environmental Science & Technology (2019). DOI: 10.1021/acs.est.9b06858
Journal information: Environmental Science & Technology 

Mealworms provide plastic solution

Tiny mealworms may hold part of the solution to our giant plastics problem. Not only are they able to consume various forms of plastic, as previous Stanford research has shown, they can eat Styrofoam containing a common toxic chemical additive and still be safely used as protein-rich feedstock for other animals, according to a new Stanford study published in Environmental Science & Technology.
The study is the first to look at where chemicals in plastic end up after being broken down in a natural system—a yellow mealworm's gut, in this case. It serves as a proof of concept for deriving value from plastic waste.
"This is definitely not what we expected to see," said study lead author Anja Malawi Brandon, a Ph.D. candidate in civil and environmental engineering at Stanford. "It's amazing that mealworms can eat a chemical additive without it building up in their body over time."
In earlier work, Stanford researchers and collaborators at other institutions revealed that mealworms, which are easy to cultivate and widely used as a food for animals ranging from chickens and snakes to fish and shrimp, can subsist on a diet of various types of plastic. They found that microorganisms in the worms' guts biodegrade the plastic in the process—a surprising and hopeful finding. However, concern remained about whether it was safe to use the plastic-eating mealworms as feed for other animals given the possibility that harmful chemicals in plastic additives might accumulate in the worms over time.
"This work provides an answer to many people who asked us whether it is safe to feed animals with mealworms that ate Styrofoam", said Wei-Min Wu, a senior research engineer in Stanford's Department of Civil and Environmental Engineering who has led or co-authored most of the Stanford studies of plastic-eating mealworms.
Styrofoam solution
Brandon, Wu and their colleagues looked at Styrofoam, or polystyrene, a common plastic typically used for packaging and insulation that is costly to recycle because of its low density and bulkiness. The material they studied contained hexabromocyclododecane, or HBCD, a flame retardant commonly added to polystyrene. The additive is one of many used to improve plastics' manufacturing properties or decrease flammability. In 2015 alone, nearly 25 million metric tons of these chemicals were added to plastics, according to various studies. Some, such as HBCD, can have significant health and environmental impacts, ranging from endocrine disruption to neurotoxicity. Because of this, the European Union plans to ban HBCD, and the U.S. Environmental Protection Agency is evaluating its risk.
Mealworms in the experiment excreted about half of the polystyrene they consumed as tiny, partially degraded fragments and the other half as carbon dioxide. With it, they excreted the HBCD—about 90 percent within 24 hours of consumption and essentially all of it after 48 hours. Mealworms fed a steady diet of HBCD-laden polystyrene were as healthy as those eating a normal diet. The same was true of shrimp fed a steady diet of the HBCD-ingesting mealworms and their counterparts on a normal diet. The plastic in the mealworms' guts likely played an important role in concentrating and removing the HBCD.
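The fate of the ingested material can be sketched as a simple mass balance. The percentages below come from the article (half fragments, half CO2; ~90% of HBCD excreted within 24 hours, essentially all by 48 hours); the input masses and the function itself are illustrative assumptions, not part of the study.

```python
# Illustrative mass balance for the mealworm experiment described above.
# Percentages are from the article; input masses are hypothetical.

def digest_polystyrene(ps_mass_mg, hbcd_mass_ug, hours):
    """Split ingested polystyrene into excreted fragments and CO2,
    and estimate how much HBCD has been excreted after `hours`."""
    fragments = 0.5 * ps_mass_mg      # ~half excreted as partially degraded fragments
    co2 = 0.5 * ps_mass_mg            # ~half respired as carbon dioxide
    if hours >= 48:
        hbcd_excreted = hbcd_mass_ug          # essentially all after 48 h
    elif hours >= 24:
        hbcd_excreted = 0.9 * hbcd_mass_ug    # ~90% within 24 h
    else:
        hbcd_excreted = 0.0                   # no figure reported under 24 h
    return fragments, co2, hbcd_excreted

frags, co2, hbcd = digest_polystyrene(100.0, 10.0, 24)
print(frags, co2, hbcd)   # 50.0 50.0 9.0
```

The key point the numbers capture is that the additive passes through rather than accumulating: the HBCD leaves the worm on roughly the same timescale as the plastic itself.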
The researchers acknowledge that mealworm-excreted HBCD still poses a hazard, and that other common plastic additives may have different fates within plastic-degrading mealworms. While hopeful for mealworm-derived solutions to the world's plastic waste crisis, they caution that lasting answers will only come in the form of biodegradable replacement materials and reduced reliance on single-use products.
"This is a wake-up call," said Brandon. "It reminds us that we need to think about what we're adding to our plastics and how we deal with it."

Assessing heat wave risk in cities as global warming continues



Urban population dynamic further aggravates health risk under heat waves. Credit: Jiachuan Yang
A trio of researchers with the Hong Kong University of Science and Technology, the University of Alabama in Huntsville and Arizona State University has shown that commuters traveling from the suburbs to cities in the future are going to encounter hotter than expected weather when they reach their destination. In their paper published in the journal Science Advances, Jiachuan Yang, Leiqiu Hu and Chenghao Wang describe their study of the differences between actual temperature readings and forecasts in urban areas, and what it could mean for commuters as climate change continues.
Cities are hotter than outlying areas in the summer due to the materials that are used to make roads and buildings—asphalt, tar and cement hold a lot of heat. This has led to what scientists call the urban heat island effect. In this new effort, the researchers wondered what might happen to people in the future who live in the suburbs and commute into cities during heat waves as the planet becomes warmer. To find out, they accessed databases of weather records to learn more about differences between forecast and actual temperatures in cities. They also accessed databases of census information, including commuting patterns.
The researchers found that temperatures in cities during heat waves were typically hotter than weather forecasters predicted, and that the size of the gap varied. Salt Lake City had the biggest difference—the city was on average 3.8 degrees C hotter during heat waves than was forecast. Across the 16 cities studied, temperatures averaged 1.9 degrees C higher than forecast. Such gaps are significant because they come on top of heat events in which temperatures are already on average 3.6 degrees C hotter than normal. Mortality (people dying) and morbidity (people getting sick) are already problematic in many cities during heat waves, and the problem is only going to get worse due to climate change. The researchers suggest that commuters of the future may face unexpected hazards as they travel from the relatively cooler suburbs to the much hotter cities, and that city planners need to take such scenarios into consideration as they look to contend with climate change.
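The arithmetic behind the finding can be made concrete. Using the study's averages (a heat wave runs ~3.6 degrees C above normal, and observed city temperatures ran a further ~1.9 degrees C above forecast), a short sketch shows what a commuter planning around the forecast would actually encounter; the function and the 30-degree baseline are illustrative assumptions, not from the paper.

```python
# Rough exposure arithmetic from the study's averages: the forecast already
# includes the ~3.6 C heat-wave anomaly, but observed urban temperatures ran
# a further ~1.9 C above forecast (up to 3.8 C in Salt Lake City).

def commuter_exposure(normal_temp_c, heatwave_anomaly_c=3.6,
                      forecast_gap_c=1.9):
    """Return (forecast, actual) city temperatures during a heat wave,
    treating all inputs as city-wide averages."""
    forecast = normal_temp_c + heatwave_anomaly_c   # what the commuter plans for
    actual = forecast + forecast_gap_c              # what the study says they meet
    return forecast, actual

forecast, actual = commuter_exposure(30.0)
print(round(forecast, 1), round(actual, 1))   # 33.6 35.5
```

In other words, on these averages a commuter leaving a 30-degree suburb under a 33.6-degree forecast would step into roughly 35.5-degree streets, which is the unplanned-for margin the authors flag.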

Attribution of exposure temperature under heat waves. Credit: Jiachuan Yang