Saturday, July 23, 2022


New research shows 2/3 of species in global shark fin trade at risk of extinction

Researchers say coastal sharks could benefit from increased protections, international trade regulations

Peer-Reviewed Publication

MOTE MARINE LABORATORY

Shark fins 

IMAGE: SHARK FINS FOUND IN A MARKET.

CREDIT: STAN SHEA

SARASOTA, FLA (July 19, 2022) — More than 70 percent of species that end up in the global shark fin trade are at risk of extinction — and sharks living closer to our coastlines might be of greatest conservation concern, according to new research.
 
A team of international scientists from the U.S. and China sampled 9,820 fin trimmings from markets in Hong Kong — one of the largest shark fin trade hubs in the world. With a little DNA detective work, they unraveled the mystery of which fin belonged to which species. In total, they found 86 different species of sharks and their relatives the rays and chimaeras. Sixty-one of those, more than two-thirds, are threatened with extinction. The research was recently published in Conservation Letters.
 
“Overfishing is most likely the immediate cause of the declining trends we are seeing in shark and ray populations around the world. The fact that we are finding so many species threatened with extinction in the global shark fin trade is a warning sign telling us that international trade might be a main driver of unsustainable fishing,” said Diego Cardeñosa, Florida International University (FIU) postdoctoral researcher and the study’s lead author.
 
The International Union for the Conservation of Nature (IUCN) Red List of Threatened Species assessed sharks and their relatives in 2021 and found about one third of all species were threatened. Results of this new study indicate species in this trade are much more likely to be in threatened categories.
 
For nearly a decade, Dr. Demian Chapman — Director of the Sharks & Rays Conservation Research Program at Mote Marine Laboratory & Aquarium and Adjunct Professor at FIU — has led the collaborative team, which includes Cardeñosa, to track and monitor the global shark fin trade. To date, they’ve conducted DNA testing on about 10,000 small scraps taken from processed imported fins, sold in markets in Hong Kong and South China. The project is in collaboration with BLOOM Association Hong Kong and Kadoorie Farm & Botanic Garden. The team’s goal is to better understand what species are in the trade and how common they are. By tracking this over time, they will be able to inform decision-makers about how well various management measures are working.
 
The study found that the most common species in the fin trade are open-ocean, or pelagic, sharks, such as blue and silky sharks. However, the greatest number of species in the trade — and many of the most common — live in coastal areas, including blacktip, dusky, spinner, and sandbar sharks. The researchers warn that without management, many of these coastal species could become extinct.
 
“A few nations are protecting or sustainably fishing sharks and their relatives, but the majority are not for a variety of reasons,” said Chapman. “Quite a few of the coastal sharks we found in the trade — such as smalltail, broadfin, whitecheek and various hound shark, river shark and small hammerhead species — are listed as Endangered or Critically Endangered and yet there are no regulations protecting them anywhere in their range. Unless the relevant governments respond with management soon, we are likely to experience a wave of extinctions among coastal sharks and rays.”
 
Three coastal species are already thought to be extinct — all found in nations that did not regulate shark fishing.
 
One way to encourage better species management within nations is to list them under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) — an international agreement seeking to protect animals and plants from overexploitation driven by international trade. The 19th meeting of the Conference of the Parties (CoP19) to CITES will take place in November. This study will provide key evidence for the body’s deliberations by bringing the plight of coastal sharks to the attention of governments and showing only a small percentage of the overall trade in shark fins is currently regulated under the Convention.
 
“Ahead of the upcoming CITES CoP19, governments have tabled proposals that would bring the vast majority of sharks traded for their fins under the Convention’s sustainability controls, action that has been informed by this study’s findings. We’re encouraged to see CITES governments match their level of ambition to the level of threat seen for sharks and rays globally, with CITES listings a strong driver for better domestic management of shark fisheries,” said Luke Warwick, Director of Shark and Ray Conservation at the Wildlife Conservation Society.
 
If these proposals are adopted, nations would be obliged to ensure that any export of listed species is legal, traceable and sustainable.
 
“There is a range of management actions that nations can take to get coastal shark fishing under control and avert this extinction crisis,” said Chapman. “From changing fishing gear, to creating protected areas, to limiting catch, the solutions are out there.”
 
“Since our Founding Director, Dr. Eugenie Clark—the Shark Lady—began her work documenting shark populations in southwest Florida and around the world over 65 years ago, the core of Mote has been to push the frontiers of science in support of evidence-based, sustainable use of our shared ocean resources,” said Mote President & CEO Dr. Michael P. Crosby. “We plan to continue this study’s vital work monitoring shark species commonly found in the trade, through the collaboration of innovative science, community-engagement and resource management that together are critical to preventing extinction of these species.”
 
“Our results highlight high levels of international trade and clear management gaps for coastal species. Many are in the highest extinction risk categories. The next category is extinction. We can’t allow this to happen,” Cardeñosa said.
 
This research was supported by the Shark Conservation Fund, the Pew Charitable Trusts, the Pew Fellowship Program, and the Roe Foundation.

About Mote Marine Laboratory & Aquarium:

Mote Marine Laboratory & Aquarium, based in Sarasota, Florida, has conducted marine research since its founding as a small, one-room laboratory in 1955. Since then, Mote has grown to encompass more than 20 research and conservation programs that span the spectrum of marine science: sustainable aquaculture systems designed to alleviate growing pressures on wild fish populations; red tide research that works to inform the public and mitigate the adverse effects of red tide with innovative technologies; marine animal science, conservation and rehabilitation programs dedicated to the protection of animals such as sea turtles, manatees and dolphins; and much more. Mote Aquarium, accredited by the Association of Zoos & Aquariums, is open 365 days per year. Learn more at mote.org or connect with @motemarinelab on Facebook, TikTok, Twitter, Instagram and YouTube.

About Florida International University:

Florida International University is a top public university that drives real talent and innovation in Miami and globally. Very high research (R1) activity and high social mobility come together at FIU to uplift and accelerate learner success in a global city by focusing on the areas of environment, health, innovation, and justice. Today, FIU has two campuses and multiple centers. FIU serves a diverse student body of more than 58,000 and 270,000 Panther alumni. U.S. News and World Report places dozens of FIU programs among the best in the nation, including international business at No. 2. Washington Monthly Magazine ranks FIU among the top 20 public universities contributing to the public good.

Chemists create artificial protein that peers into Earth’s chemical past

Artificial protein reveals new clues about primordial chemical processes

Peer-Reviewed Publication

OHIO STATE UNIVERSITY

COLUMBUS, Ohio – Scientists have developed an artificial protein that could offer new insights into chemical evolution on early Earth. 

All cells need energy to survive, but because the kinds of chemicals available during the planet’s early days were so limited compared to today’s vast scope of chemical diversity, early life forms had a lot less energy to build the complex organic structures that make up the world we know today. 

New research, published in the journal Proceedings of the National Academy of Sciences, provides evidence that many of the organisms born of Earth’s primordial soup relied heavily on metals, specifically nickel, to help store and expend energy. 

Current theories about how microbial life arose suggest that while cells used carbon dioxide and hydrogen as a fuel source, they also inhabited areas rich in reduced metals like iron and nickel. These first chemical reactions were also largely driven by an enzyme called acetyl coenzyme A synthase, or ACS, a molecule essential for energy production and forming new chemical bonds. 

But for years, scientists in the field have been split on how this enzyme actually works – whether the chemical reactions it spurred could be assembled randomly or if its chemical constructions followed a strict roadmap. Hannah Shafaat, co-author of the study and a professor in chemistry and biochemistry at The Ohio State University, said her team’s artificial model of the enzyme reveals a lot about how its native ancestor might have acted during Earth’s first few billion years.

Compared to what scientists find in nature, this model protein is much easier to study and manipulate. Because of this, the team was able to conclude that ACS does, in fact, have to build molecules one step at a time. Such information is crucial to understanding how organic chemistry on Earth began to mature. 

“Rather than taking the enzyme and stripping it down, we’re trying to build it from the bottom up,” Shafaat said. “And knowing that you have to do things in the right order can basically be a guide for how to recreate it in the lab.” 

As scientists hope to understand what may have emerged first out of the primordial soup, Shafaat said the study demonstrated that even simple enzymes like their model could have supported early life. Shafaat, who has worked on the project for nearly five years, said that while the study did run into some challenges, the lessons the team learned were worth it in the long run. 

In addition to being important for understanding primordial chemistry, their findings have broad implications for other fields, including the energy sector, Shafaat said. “If we can understand how nature figured out how to use these compounds billions and billions of years ago, we can harness some of those same ideas for our own alternative energy devices,” she said. 

At the moment, one of the biggest challenges the energy sector faces is making liquid fuel. Yet this study could be the first step in finding a natural energy source that could replace the gasoline and oil humans overuse, Shafaat said. Now, her team is working on streamlining their product, but will continue to investigate whether there are other primeval secrets their enzyme might divulge. 

Co-authors were Anastasia C. Manesis and Alina Yerbulekova of Ohio State, and Jason Shearer of Trinity University. This work was supported by the U.S. Department of Energy. 

What makes Omicron more infectious than other COVID-19 variants

Researchers used virus-like particles to identify which mutations of the Omicron SARS-CoV-2 virus make it more effective at infecting cells and escaping antibodies

Peer-Reviewed Publication

GLADSTONE INSTITUTES

Gladstone researchers Abdullah Syed and Alison Ciling 

IMAGE: A TEAM OF RESEARCHERS, INCLUDING ABDULLAH SYED (LEFT) AND ALISON CILING (RIGHT), USED VIRUS-LIKE PARTICLES TO IDENTIFY WHICH PARTS OF THE SARS-COV-2 VIRUS ARE RESPONSIBLE FOR ITS INCREASED INFECTIVITY AND SPREAD.

CREDIT: PHOTO: MICHAEL SHORT/GLADSTONE INSTITUTES

SAN FRANCISCO, CA—As the Omicron variant of SARS-CoV-2 spread rapidly around the globe earlier this year, researchers at Gladstone Institutes, UC Berkeley, and the Innovative Genomics Institute used virus-like particles to identify which parts of the virus are responsible for its increased infectivity and spread.

They also confirmed that antibodies generated against previous variants of the virus are much less effective against Omicron, but showed that recently boosted individuals have higher levels of effective antibodies. The research was published in the journal Proceedings of the National Academy of Sciences of the United States.

“The virus-like particle system lets us rapidly query new variants and get insight into whether their infectivity in cell culture is changed,” says Melanie Ott, MD, PhD, director of the Gladstone Institute of Virology and a senior author of the new study. “In the case of Omicron, it allowed us to get a much better handle on how, at a molecular level, this variant is different from others.”

“This approach is incredibly useful for quickly studying the effectiveness of prior antibodies and vaccines on a newly emerging viral strain,” says the study’s other senior author Jennifer Doudna, PhD, senior investigator at Gladstone, professor at UC Berkeley, founder of the Innovative Genomics Institute, and investigator of the Howard Hughes Medical Institute.

Virus-Like Particles Speed Omicron Research

Epidemiological data has suggested that the Omicron variant of SARS-CoV-2, first detected in November 2021 in South Africa, spreads between people more easily than the original strain of the virus. Compared to other variants, it has also caused more breakthrough infections—in people previously infected with or fully vaccinated against COVID-19.

Early in 2021, Ott’s and Doudna’s research groups developed virus-like particles to study the SARS-CoV-2 virus. These particles are composed of the membrane, envelope, nucleocapsid, and spike proteins that make up the viral particle’s structure. However, virus-like particles lack the virus’s genome, so they cannot infect people and are therefore safer to work with than live virus. Scientists can also engineer new virus-like particles much faster than they can grow new variants of the live virus to study.

In their previous work, the researchers showed that the efficiency of virus-like particle assembly was related to the infectivity of the corresponding full, live virus. For instance, if a virus-like particle carrying a certain mutation was more efficient at forming viral particles, a copy of the live virus with the same mutation was also more infectious based on cell culture experiments.

In recent months, the team developed virus-like particles to capture the effect of different mutations in the emerging Omicron variant of SARS-CoV-2.

Omicron mutations in the spike protein, they discovered, made virus-like particles twice as infectious as those with the ancestral spike protein. And virus-like particles carrying Omicron’s mutations in the nucleocapsid protein were 30 times more infectious than the ancestral SARS-CoV-2.

“There has been a lot of focus on spike, but we’re seeing in our system that for both Delta and Omicron, nucleocapsid is really more important in enhancing the spread of this virus,” says Ott. “I think if we want to generate better vaccines or look at blocking transmission of COVID-19, we might want to think about targets other than the spike protein.”

When the team made virus-like particles carrying Omicron mutations in the membrane or envelope proteins, they found that the particles were no more infectious than the ancestral virus-like particles; in fact, they were only about half as infectious as some other variants.

“Omicron has a lot of mutations, and our findings tell us that some of these mutations are actually harmful for the virus,” says Abdullah Syed, PhD, first author of the study and a postdoctoral fellow in Doudna’s lab at Gladstone. “But it also means that it could be possible for Omicron to evolve to be even more infectious if those brakes are lifted.”

How Omicron Escapes Antibodies

The researchers also tested the ability of antibodies to neutralize the SARS-CoV-2-like particles. They collaborated with the Innovation Team at Curative, which established a comprehensive serum biobank by administering over 2 million vaccinations across the US.

The team used serum from 38 people who had been vaccinated against COVID-19 or were unvaccinated but had recovered from the virus, as well as from 8 people who had received a booster vaccine within the previous 3 weeks. Then, the researchers exposed the virus-like particles they had created to these serum samples to test the samples’ ability to neutralize the particles.

Sera from people vaccinated with the Pfizer/BioNTech or Moderna vaccine within the previous 4 to 6 weeks showed high levels of neutralization against virus-like particles of ancestral SARS-CoV-2, but levels of neutralization were 3 times lower for particles of the Delta variant, and about 15 times lower for Omicron virus-like particles. People vaccinated with the Johnson & Johnson vaccine or who recovered from COVID-19 showed low levels of neutralization against the ancestral virus-like particles, and little difference was apparent for the Delta and Omicron variants.

In addition, the researchers showed that within 2 to 3 weeks of receiving a third dose of Pfizer/BioNTech, all 8 boosted individuals in the study had detectable levels of antibodies capable of neutralizing all SARS-CoV-2 variants, including Omicron. However, levels of antibodies against Omicron were still 8 times lower than antibodies against the ancestral virus.

“Our findings support the idea that Omicron is much more capable of escaping our vaccine-induced immunity than previous strains of SARS-CoV-2,” says Ott. “It also underscores that booster shots from the mRNA vaccines seem to provide some degree of additional protection, even against Omicron.”

Additionally, when the team tested the monoclonal antibodies casirivimab and imdevimab (known commercially as REGEN-Cov), they found that the drugs showed high levels of neutralization against ancestral and Delta variants of SARS-CoV-2, but no detectable neutralization at all against the Omicron-like particles.

“We’re certainly not at a point where we fully understand this variant, but our data add to the growing evidence that it seems to be very good at infecting and very good at escaping antibodies,” says Syed.

###

About the Research Project

The paper “Omicron mutations enhance infectivity and reduce antibody neutralization of SARS-CoV-2 virus-like particles” was published in the journal Proceedings of the National Academy of Sciences of the United States on July 19, 2022.

Other authors are: Alison Ciling, Mir Khalid, Bharath Sreekumar, Renuka Kumar, Ines Silva, Taha Taha, Takako Tabata, and Irene Chen of Gladstone; and Bilal Milbes, Noah Kojima, Victoria Hess, Maria Shacreaw, Lauren Lopez, Matthew Brobeck, Fred Turner, and Lee Spraggon of Curative, Inc.

The work was supported by the National Institutes of Health (R21AI59666 and F31AI164671-01), the Howard Hughes Medical Institute, the Natural Sciences and Engineering Research Council of Canada (NSERC PDF-533021-2019), the Roddenberry Foundation, and a gift from Pam and Ed Taft. 

About Gladstone Institutes

To ensure our work does the greatest good, Gladstone Institutes focuses on conditions with profound medical, economic, and social impact—unsolved diseases. Gladstone is an independent, nonprofit life science research organization that uses visionary science and technology to overcome disease. It has an academic affiliation with the University of California, San Francisco.

Alexa and Siri, listen up! UVA collab is teaching machines to really hear us

Reports and Proceedings

UNIVERSITY OF VIRGINIA

The UVA Researchers Behind SITHCon 

IMAGE: UNIVERSITY OF VIRGINIA GRADUATE STUDENT BRANDON JACQUES, LEFT, WORKED IN THE LAB OF PER SEDERBERG, RIGHT, TO PROGRAM A WORKING DEMO OF THE TECHNOLOGY.

CREDIT: PHOTO BY DAN ADDISON, UNIVERSITY OF VIRGINIA COMMUNICATIONS

University of Virginia cognitive scientist Per Sederberg has a fun experiment you can try at home. Take out your smartphone and, using a voice assistant such as the one for Google’s search engine, say the word “octopus” as slowly as you can.

Your device will struggle to reiterate what you just said. It might supply a nonsensical response, or it might give you something close but still off – like “toe pus.” Gross!

The point is, Sederberg said, when it comes to receiving auditory signals like humans and other animals do – despite all of the computing power dedicated to the task by such heavyweights as Google, DeepMind, IBM and Microsoft – current artificial intelligence remains a bit hard of hearing.

The outcomes can range from comical and mildly frustrating to downright alienating for those who have speech problems.

But using recent breakthroughs in neuroscience as a model, UVA collaborative research has made it possible to convert existing AI neural networks into technology that can truly hear us, no matter at what pace we speak.

The deep learning tool is called SITHCon, and by generalizing input, it can understand words spoken at different speeds than a network was trained on.

This new ability won’t just change the end-user’s experience; it has the potential to alter how artificial neural networks “think” – allowing them to process information more efficiently. And that could change everything in an industry constantly looking to boost processing capability, minimize data storage and reduce AI’s massive carbon footprint.

Sederberg, an associate professor of psychology who serves as the director of the Cognitive Science Program at UVA, collaborated with graduate student Brandon Jacques to program a working demo of the technology, in association with researchers at Boston University and Indiana University.

“We’ve demonstrated that we can decode speech, in particular scaled speech, better than any model we know of,” said Jacques, who is first author on the paper.

Sederberg added, “We kind of view ourselves as a ragtag band of misfits. We solved this problem that the big crews at Google and DeepMind and Apple didn’t.”

The breakthrough research was presented Tuesday at the high-profile International Conference on Machine Learning, or ICML, in Baltimore.

Current AI Training: Auditory Overload

For decades, but more so in the last 20 years, companies have built complex artificial neural networks into machines to try to mimic how the human brain recognizes a changing world. These programs don’t just facilitate basic information retrieval and consumerism; they also specialize to predict the stock market, diagnose medical conditions and surveil for national security threats, among many other applications.

“At its core, we are trying to detect meaningful patterns in the world around us,” Sederberg said. “Those patterns will help us make decisions on how to behave and how to align ourselves with our environment, so we can get as many rewards as possible.”

Programmers used the brain as their initial inspiration for the technology, thus the name “neural networks.”

“Early AI researchers took the basic properties of neurons and how they’re connected to one another and recreated those with computer code,” Sederberg said.

For complex problems like teaching machines to “hear” language, however, programmers unwittingly took a different path than how the brain actually works, he said. They failed to pivot based on developments in the understanding of neuroscience.

“The way these large companies deal with the problem is to throw computational resources at it,” the professor explained. “So they make the neural networks bigger. A field that was originally inspired by the brain has turned into an engineering problem.” 

Essentially, programmers input a multitude of different voices using different words at different speeds and train the large networks through a process called backpropagation. The programmers know the responses they want to achieve, so they keep feeding the continuously refined information back in a loop. The AI then begins to give appropriate weight to aspects of the input that will result in accurate responses. The sounds become usable characters of text.

“You do this many millions of times,” Sederberg said.
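The loop described above can be sketched in a few lines. This is a toy illustration of backpropagation, not any company’s actual speech system: a tiny two-layer network, random “sound” features standing in for real audio, and the gradient computed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-dimensional "sound features" mapped to a desired response.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(4, 8))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(500):          # "you do this many millions of times"
    h = np.tanh(X @ W1)          # forward pass
    p = sigmoid(h @ W2)
    loss = np.mean((p - y) ** 2) # how far from the desired response?
    losses.append(loss)

    # Backpropagation: chain rule from the output error to each weight.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh

    W1 -= 0.5 * dW1              # gradient-descent weight update
    W2 -= 0.5 * dW2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each pass nudges the weights so that inputs that should yield the same response do, which is the “appropriate weight” the passage refers to.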

While the training data sets that serve as the inputs have improved, as have computational speeds, the process is still less than ideal as programmers add more layers to detect greater nuances and complexity – so-called “deep” or “convolutional” learning.

More than 7,000 languages are spoken in the world today. Variations arise with accents and dialects, deeper or higher voices – and of course faster or slower speech. As competitors create better products, at every step, a computer has to process the information.

That has real-world consequences for the environment. In 2019, a study found that the carbon dioxide emissions from the energy required in the training of a single large deep-learning model equated to the lifetime footprint of five cars.

Three years later, the data sets and neural networks have only continued to grow.

How the Brain Really Hears Speech

The late Howard Eichenbaum of Boston University coined the term “time cells,” the phenomenon upon which this new AI research is constructed. Neuroscientists studying time cells in mice, and then humans, demonstrated that there are spikes in neural activity when the brain interprets time-based input, such as sound. Residing in the hippocampus and other parts of the brain, these individual neurons capture specific intervals – data points that the brain reviews and interprets in relationship. The cells reside alongside so-called “place cells” that help us form mental maps.

Time cells help the brain create a unified understanding of sound, no matter how fast or slow the information arrives.

“If I say ‘oooooooc-toooooo-pussssssss,’ you’ve probably never heard someone say ‘octopus’ at that speed before, and yet you can understand it because the way your brain is processing that information is called ‘scale invariant,’” Sederberg said. “What it basically means is if you’ve heard that and learned to decode that information at one scale, if that information now comes in a little faster or a little slower, or even a lot slower, you’ll still get it.”

The main exception to the rule, he said, is information that comes in hyper-fast. That data will not always translate. “You lose bits of information,” he said.

Cognitive researcher Marc Howard’s lab at Boston University continues to build on the time cell discovery. A collaborator with Sederberg for over 20 years, Howard studies how human beings understand the events of their lives. He then converts that understanding to math.

Howard’s equation describing auditory memory involves a timeline. The timeline is built using time cells firing in sequence. Critically, the equation predicts that the timeline blurs – and in a particular way – as sound moves toward the past. That’s because the brain’s memory of an event grows less precise with time.

“So there’s a specific pattern of firing that codes for what happened for a specific time in the past, and information gets fuzzier and fuzzier the farther in the past it goes,” Sederberg said. “The cool thing is Marc and a post-doc going through Marc’s lab figured out mathematically how this should look. Then neuroscientists started finding evidence for it in the brain.”

Time adds context to sounds, and that’s part of what gives what’s spoken to us meaning. Howard said the math neatly boils down.

“Time cells in the brain seem to obey that equation,” Howard said.
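The article does not print Howard’s equation; the form below is a reconstruction from his group’s published time-cell framework and should be read as such. A layer of leaky integrators holds a Laplace-like encoding of the input’s past, and an approximate inversion recovers the blurred timeline:

```latex
% Reconstructed from the published scale-invariant temporal memory framework
% (an assumption; the article itself does not state the equation).
F(s,t) = \int_{-\infty}^{t} f(t')\, e^{-s\,(t-t')}\, dt'
\quad \text{(leaky integrators encode a Laplace transform of the past)}

\tilde{f}(\overset{*}{\tau}, t)
  = \frac{(-1)^k}{k!}\, s^{k+1}\, \frac{\partial^k F(s,t)}{\partial s^k},
\qquad \overset{*}{\tau} = -\frac{k}{s}
\quad \text{(approximate inversion yields the blurred timeline)}
```

The blur of the recovered timeline around lag $\overset{*}{\tau}$ grows in proportion to $\overset{*}{\tau}$ itself, which is the “fuzzier and fuzzier the farther in the past” behavior described above.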

UVA Codes the Voice Decoder

About five years ago, Sederberg and Howard identified that the AI field could benefit from such representations inspired by the brain. Working with Howard’s lab and in consultation with Zoran Tiganj and colleagues at Indiana University, Sederberg’s Computational Memory Lab began building and testing models.

Jacques made the big breakthrough about three years ago that helped him do the coding for the resulting proof of concept. The algorithm features a form of compression that can be unpacked as needed – much the way a zip file on a computer works to compress and store large-size files. The machine only stores the “memory” of a sound at a resolution that will be useful later, saving storage space.

“Because the information is logarithmically compressed, it doesn’t completely change the pattern when the input is scaled, it just shifts over,” Sederberg said.
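That shift-instead-of-reshape behavior can be seen in a toy model. The sketch below is our own illustration of the idea, not the SITHCon code: it samples a smooth “word-like” burst of sound energy at logarithmically spaced lags and shows that slowing the input to half speed shifts the stored pattern by a fixed number of nodes without changing its shape.

```python
import numpy as np

c = 2 ** 0.25                      # ratio between adjacent lags
taus = c ** np.arange(40)          # log-spaced lags: tau_i = c**i

def envelope(t, center=40.0, width=0.3):
    """A smooth 'word-like' burst of sound energy centered at time `center`."""
    return np.exp(-0.5 * ((np.log(t) - np.log(center)) / width) ** 2)

normal = envelope(taus)            # the word at normal speed
slowed = envelope(taus / 2.0)      # the same word spoken twice as slowly

# With log spacing, slowing by 2x shifts the pattern by log(2)/log(c) nodes
# exactly (here, 4), rather than stretching it into a new pattern.
shift = int(round(np.log(2) / np.log(c)))
print(shift, np.allclose(normal[:-shift], slowed[shift:]))
```

Because the shifted pattern is otherwise identical, a decoder trained at one speed can recognize the same word at other speeds, which is the generalization SITHCon exploits.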

The AI training for SITHCon was compared to a pre-existing resource available free to researchers called a “temporal convolutional network.” The goal was to convert the network from one trained only to hear at specific speeds into one that could generalize to speeds it had never encountered.

The process started with a basic language – Morse code, which uses long and short bursts of sound to represent dots and dashes – and progressed to an open-source set of English speakers saying the numbers 1 through 9 for the input.

In the end, no further training was needed. Once the AI recognized the communication at one speed, it couldn’t be fooled if a speaker strung out the words.

“We showed that SITHCon could generalize to speech scaled up or down in speed, whereas other models failed to decode information at speeds they didn’t see at training,” Jacques said.

Now UVA has decided to make its code available for free, in order to advance the knowledge. The team says the information should adapt for any neural network that translates voice.

“We’re going to publish and release all the code because we believe in open science,” Sederberg said. “The hope is that companies will see this, get really excited and say they would like to fund our continuing work. We've tapped into a fundamental way the brain processes information, combining power and efficiency, and we’ve only scratched the surface of what these AI models can do.”

But knowing that they’ve built a better mousetrap, are the researchers worried at all about how the new technology might be used?

Sederberg said he’s optimistic that AI that hears better will be approached ethically, as all technology should be in theory.

“Right now, these companies have been running into computational bottlenecks while trying to build more powerful and useful tools,” he said. “You have to hope the positives outweigh the negatives. If you can offload more of your thought processes to computers, it will make us a more productive world, for better or for worse.”

Jacques, a new father, said, “It’s exciting to think our work may be giving birth to a new direction in AI.”

Understanding Thermal Forces in Rail

  • Written by Gary T. Fry, Vice President, Fry Technical Services, Inc.  July 09, 2022

    FIGURE 1. Photograph of a bright June sun. Summer heat and winter cold can induce forces in railroad rail large enough to cause track failures. (Courtesy of Gary T. Fry.)

    RAILWAY AGE, JULY 2022 ISSUE: How temperature variations can cause sun kinks and pull-aparts

    Welcome to “Timeout for Tech with Gary T. Fry, Ph.D., P.E.” Each month in this series, we examine a technology topic that professionals in the railway industry have asked to learn more about. This month we discuss how temperature variations can cause forces in railroad rail, even when trains are not present. 

    Figure 1 (above) is a photograph of the sun on a bright day in June. How is it that the heat of a summer day can cause several yards of railroad track to suddenly shift sideways out of alignment by a couple of feet or more? Conversely, why does a rail suddenly snap in two on a cold winter day? Several scientific principles are in play to answer these questions, but one stands out as the most significant: rail steel, like most metal materials, expands when heated and contracts when cooled.

    There’s a fun way to think about this phenomenon. Being connected by rail lines, shouldn’t Chicago Union Station and St. Louis Union Station be pulled closer to one another in the winter and pushed farther apart in the summer? For example, over the 260-mile distance between the two stations (as the crow flies), rail laid stress-free at 65 degrees Fahrenheit would be 310 feet longer on a 100-degree summer afternoon and 490 feet shorter on a 10-degree winter night. That’s a swing of 800 feet! Setting amusement aside, what really happens?
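    The figures above can be checked with the standard free-expansion relation ΔL = αLΔT. A minimal sketch, assuming a typical textbook expansion coefficient for rail steel of about 6.5 × 10⁻⁶ per degree Fahrenheit (the article does not state one):

```python
# Free thermal expansion of unrestrained rail: delta_L = alpha * L * delta_T.
# ALPHA is a typical published value for rail steel, assumed here.
ALPHA = 6.5e-6           # expansion coefficient, 1/degF (assumed)
L_FEET = 260 * 5280      # 260 miles between the stations, in feet

def free_expansion(delta_t_degf: float) -> float:
    """Unrestrained length change (ft) for a temperature shift from the RNT."""
    return ALPHA * L_FEET * delta_t_degf

print(round(free_expansion(100 - 65)))   # +35 degF: about 312 ft longer
print(round(free_expansion(10 - 65)))    # -55 degF: about 491 ft shorter
```

    Both results land within a few feet of the article’s 310- and 490-foot figures.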

    FIGURE 2. Schematic drawings of track that illustrate the combined effects of temperature variation and track support conditions. (Courtesy of Gary T. Fry.)

    When rail is subjected to temperature variations, it moves relative to the ground surface. But it does not move freely because it is connected to ties that are buried in ballast. Let’s look at this more closely. Figure 2 shows five schematic drawings of the same short segment of track. In the drawings, the rail is attached to the ties in such a way that the rail and ties move exactly together. Drawing C, shaded green, illustrates the track segment at its natural, unstressed length: that is, at its rail neutral temperature or RNT. Drawing B shows the segment at its freely expanded, unrestrained length associated with some temperature above its RNT. Drawing D shows the segment at its freely contracted, unrestrained length associated with some temperature below its RNT. When considering drawings B and D, we should imagine that the track segment is not installed in ballast; rather, the bottoms of the ties rest upon a frictionless surface. This allows the steel rail to assume the length it would naturally have at the given temperatures above or below its RNT. Stated another way, drawings B and D show the rail “where it wants to be” at those temperatures—longer and stress-free when heated or shorter and stress-free when cooled.

    Now consider that the ties are confined within consolidated ballast, and, by various means, the rail is attached to the ties. Consequently, the rail cannot be “where it wants to be.” Drawing A, shaded red in Figure 2, shows where the rail winds up being at a given temperature above its RNT. The rail is held shorter than “where it wants to be” by some amount that depends upon the movement between the ties and ballast and the movement between the rail and the ties. It is being restrained and experiences axial compressive force as a result. Hence an axial compression force demand is being placed on the track structure because of the increase in temperature.

    Drawing E, shaded blue in Figure 2, shows where the rail winds up being at a given temperature below its RNT. The rail is held longer than “where it wants to be” by some amount that similarly depends upon the movement between the ties and ballast and the movement between the rail and the ties. It is being restrained and experiences axial tensile force as a result. This time, an axial tension force demand is being placed on the track structure.
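    In the limiting case of full restraint, the rail develops an axial force F = EAαΔT regardless of its length. A hedged sketch using typical published values for 136RE rail (the modulus, cross-sectional area and expansion coefficient are assumptions, not figures from this article); real track is only partially restrained, so actual forces are lower and depend on the many details discussed below:

```python
# Axial force in a fully restrained rail: F = E * A * alpha * delta_T.
# All three material/section constants are typical published values (assumed).
E_PSI = 30e6        # elastic modulus of rail steel, psi
AREA_IN2 = 13.35    # cross-sectional area of 136RE rail, in^2
ALPHA = 6.5e-6      # expansion coefficient, 1/degF

def restrained_axial_force(delta_t_degf: float) -> float:
    """Axial force (lb) in one fully restrained rail.
    Positive = compression (above RNT); negative = tension (below RNT)."""
    return E_PSI * AREA_IN2 * ALPHA * delta_t_degf

# A rail 40 degF above its neutral temperature:
print(restrained_axial_force(40.0))   # roughly 104,000 lb of compression
```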

    Those are the essential features through which temperature variations can cause axial compression and axial tension force demands in railroad rail. But we are just scratching the surface with that introduction. The specific magnitudes of rail forces that develop depend upon several details, many of which can exhibit substantial variability: for example, track type (jointed rail or continuously welded rail), rail size, rail neutral temperature, rail temperature (as opposed to ambient temperature), track geometry, proximity to special trackwork, proximity to a bridge, proximity to a grade crossing, rail anchorage arrangements, ballast condition, tie and fastener type and condition, etc. In combination, these details, their complex interactions, and their inherent variability, make it difficult to predict thermal forces in rail accurately and precisely.

    Our discussion to this point has focused on the demand placed on the track structure because of rail temperature variation: that is, axial compression forces and axial tension forces. As a structural system, railroad track has capacity to resist these demands, but if the axial compression demand placed on the track exceeds its axial compression capacity, the track will likely experience failure in the form of an instability called buckling. The physical appearance of a track buckle is a significant lateral shift out of alignment. This failure mode is commonly termed a “sun kink,” because it happens most frequently on hot, sunny days, and it looks a bit like a kink in the rail when viewed from above.

    Railroad track also has a finite capacity to resist axial tension. If its axial tension capacity is exceeded, the rail will break. This failure mode is commonly termed a “pull-apart,” because the tension in the rail results in formation of a gap between the broken rail ends, giving an appearance that the ends were pulled apart. Of special concern, if a rail contains a fatigue defect, its axial tension capacity is reduced and also becomes temperature dependent, being lower at lower temperatures. Hence, for rail with fatigue defects, cold weather brings the compounding effects of increased demand and reduced capacity. Rail that does not contain fatigue defects has negligible risk of failure, even under the most severe thermal tension forces that can be associated with winter conditions.

    There are several options available to mitigate risk of failure associated with thermal forces in rail. For example, it is possible to adjust the RNT of rail. In the summer when the rail “wants to be longer,” sections of rail can be removed, shortening the rail, increasing the RNT, and lowering the compression forces. Conversely, RNT can be lowered for winter conditions by adding sections to the contracting rail, thereby reducing axial tension forces. Done strategically, RNT management can have the effect of reducing peak seasonal demands.
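    The arithmetic behind an RNT adjustment follows from the same expansion relation: to raise the neutral temperature of a rail string by some amount, roughly αL times that amount of rail must be cut out (adding rail lowers RNT by the same logic). A sketch with illustrative numbers, again assuming a typical expansion coefficient:

```python
# Length of rail to remove to raise the neutral temperature of a string:
# cut = alpha * L * delta_RNT. ALPHA is a typical value (assumed).
ALPHA = 6.5e-6   # expansion coefficient, 1/degF (assumed)

def rail_to_remove_inches(string_length_ft: float, rnt_increase_degf: float) -> float:
    """Approximate rail length (inches) to cut out to raise RNT."""
    return ALPHA * string_length_ft * 12.0 * rnt_increase_degf

# Raising the RNT of a 1,440 ft rail string by 10 degF:
print(round(rail_to_remove_inches(1440.0, 10.0), 2))   # about 1.12 inches
```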

    FIGURE 3. Schematic drawing of demand and capacity relationships with constant capacity and time-varying demand. (Courtesy of Gary T. Fry.)

    Figure 3 is a schematic diagram illustrating the effect of adjusting RNT to control seasonal demand. The blue region in Figure 3 represents capacity as a range around a central average value. It is assumed that the capacity of the rail does not change over the period represented. The yellow region represents demand as a range with a central average value. Periodic reductions in demand are indicated where RNT adjustments are made. Red markers indicate times when failure is likely: that is, when the upper reaches of demand exceed the lower reaches of capacity. Adjusting RNT is currently the most common method of mitigating sun kinks and pull-aparts. This would be considered an example of demand control to mitigate risk.
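    The failure criterion in Figure 3 can be stated compactly: treating demand and capacity each as a range around a central value, failure becomes likely whenever the upper reach of demand exceeds the lower reach of capacity. A minimal sketch with illustrative numbers (none taken from the figure):

```python
# Figure-3-style check: failure is likely when the top of the demand range
# exceeds the bottom of the capacity range. All inputs are illustrative.
def failure_likely(demand_mean: float, demand_spread: float,
                   capacity_mean: float, capacity_spread: float) -> bool:
    """True when the demand range overlaps the bottom of the capacity range."""
    return demand_mean + demand_spread > capacity_mean - capacity_spread

print(failure_likely(60, 15, 120, 20))   # comfortable margin: False
print(failure_likely(90, 15, 120, 20))   # ranges overlap: True
```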

    Although demand management can be an effective approach to mitigate risk of failure, it is also possible to increase capacity through relevant design, construction and maintenance procedures. Theoretically, if the safe design compression capacity of track to resist sun kinks can be increased adequately, the need for seasonal RNT adjustments could be nearly eliminated, while simultaneously decreasing risk of failure from sun kinks and pull-aparts. This would be considered a capacity control risk management approach.

    In summary, sun kinks and pull-aparts are track failure modes associated with thermal forces that can develop in rail: axial compression and axial tension forces, respectively. The risk of these failures can be mitigated through demand management by adjusting rail neutral temperature. The failure risk can also be mitigated through capacity management, optimally by adequate increases to the axial compression capacity of the track system.

    Dr. Fry is Vice President of Fry Technical Services, Inc. (https://www.frytechservices.com). He has 30 years of experience in research and consulting on the fatigue and fracture behavior of structural metals and weldments. His research results have been incorporated into international codes of practice used in the design of structural components and systems including structural welds, railway and highway bridges, and high-rise commercial buildings in seismic risk zones. He has extensive experience performing in situ testing of railway bridges under live loading of trains, including high-speed passenger trains and heavy-axle-load freight trains. His research, publications and consulting have advanced the state of the art in structural health monitoring and structural impairment detection. 

    OptiFuel Lands Locomotive Repower Project in Argentina

    Written by Marybeth Luczak, Executive Editor
    Repowering existing freight locomotives to zero NOx, PM and CO2 emissions using RNG. (Graphic and caption details, courtesy of Business Wire and OptiFuel)

    Beaufort, S.C.-based OptiFuel Systems has signed a collaboration agreement with the Argentina Ministry of Transportation through Ferrocarriles Argentinos Sociedad del Estado (F.A.S.E.) to upgrade 400 switcher and line-haul freight locomotives from diesel to compressed natural gas (CNG) and/or renewable natural gas (RNG) power.

    OptiFuel on July 12 reported that it is developing modular repower kits for diesel locomotives and new locomotives in all lengths, horsepower levels and track gauges. For the Argentina project, it said the kits will include OptiFuel’s zero-emission CNG/RNG engine pods in a hybrid configuration (1,500 hp-4,500 hp); locomotive control modules; and onboard CNG/RNG storage pods that can carry up to 2,000 diesel gallon equivalents (DGEs) of natural gas. Additionally, OptiFuel will provide powered tender cars (3,000 hp) that can carry 11,500 DGEs of CNG/RNG, and construct about 12-15 CNG fuel stations along Argentina’s rail network, each with the capacity to refuel a tender car in less than an hour. The kits will be built in the United States and shipped for local assembly in Argentina.

    OptiFuel’s zero-emission rail engine solutions are powered by its EPA-certified locomotive CNG/RNG engine rated at 0.00 g/bhp-hr for NOx, 0.000 g/bhp-hr for PM, and net-negative CO2 when using RNG, according to the company.

    “OptiFuel is excited about the opportunity to provide zero-emission locomotives, tenders and refueling equipment to Argentina,” OptiFuel President Scott Myers said. “OptiFuel developed and certified these technologies for rail because we believe there is a need for cleaner locomotives that drive increasing value to the world’s railroads, railroad customers and communities.”

    “Our Transportation Modernization Plan is about developing more infrastructure throughout the country, and it is also about innovative technology—such as this change in the energy matrix of our trains, to make them cleaner, more efficient, sustainable and cheaper,” Argentina Transport Minister Alexis Guerrera said.

    BHP to speed up US$5.7 billion Jansen potash mine

    By Cecilia Jamasmie July 19, 2022 

    BHP is working to accelerate Jansen Stage 1 first production into 2026.
     (Image courtesy of BHP.)

    BHP (ASX: BHP) is seeking to accelerate construction at its US$5.7 billion Jansen potash project in Canada as high gas prices and sanctions on key exporters continue to disrupt global supplies of fertilizers.

    The world’s largest miner had originally planned to kick off production at Jansen in 2027. Market conditions, however, have prompted it to try to bring Stage 1 first production forward into 2026; Stage 1 is expected to yield 4.35 million tonnes of potash per year.

    The company also said it is evaluating options to accelerate Stage 2, which would add a further 4 million tonnes per year, at a capital intensity of between US$800 and US$900 per tonne, almost 30% lower than expected for Stage 1.

    “BHP is trying to accelerate first tonnes at Jansen, but it still seems best case is first tonnes come late 2026 with a two-year ramp,” BMO Fertilizers and Chemicals analyst, Joel Jackson, wrote in May.

    “We believe BHP needs to hire about 600 miners for Jansen with the labour per tonne deemed lower than [competitors] Nutrien and Mosaic’s incumbent mines as BHP expected to employ less equipment per tonne and other innovation,” Jackson noted.

    The company has completed installation of the production and service shafts required for the project, at a cost of US$2.97 billion, BHP said Tuesday in its operational review for the year ended June 30.

    Quarter of global supply

    Potash is attractive to farmers as a fertilizer because it boosts drought tolerance and improves crop quality.

    BHP expects potash demand to increase by 15 million tonnes to roughly 105 million tonnes by 2040, or 1.5% to 3% a year, in line with global population growth and pressure to improve farming yields given limited land supply.

    Jansen has the potential to produce 17 million tonnes a year under a four-phase development. This would account for about 25% of current global potash demand.

    “If we decide to bring on all four stages, and at prices just half of where they are today, we’d be generating around US$4 billion to US$5 billion of EBITDA [earnings before interest, taxes, depreciation and amortization] per year,” chief executive Mike Henry said at a mining conference in May.

    This compares to a five-year average of US$3 billion a year from the miner’s petroleum business.

    BHP has been trying to tap into the fertilizer market for some time. In 2010, it unsuccessfully bid US$38.6 billion for Potash Corp. of Saskatchewan, which in 2018 merged with Agrium Inc. to form Nutrien (TSE, NYSE: NTR).

    The ongoing war in Ukraine has left the world short not only of important grains but also of fertilizers, since neighbours Russia and Belarus account for almost 40% of global production.

    Crop nutrients have become more expensive as an increase in natural gas prices has caused costs to soar.

    Given the current political climate, as well as the continuing effects of the Covid-19 pandemic worldwide, BHP is expecting the current supply chain issues in the mining sector to take up to three years to resolve.

    Barrick Gold, Pakistan set copper-gold project funding structure

    Cecilia Jamasmie | July 19, 2022 |

    The Reko Diq deposit is located in the Balochistan province, pictured here.
     
    (Image by Michael Foley, Flickr Commons.)

    Barrick Gold (TSX: ABX)(NYSE: GOLD) and the Pakistan government have agreed on the structure governing the funding and profit-sharing of the $7 billion Reko Diq copper-gold deposit in the province of Balochistan.


    The agreement in principle sets a partnership between Barrick, the Balochistan Provincial Government and Pakistani state-owned enterprises, the company said on Tuesday.

    The operation will be owned 50% by Barrick, 25% by the Province and 25% by Pakistani state-owned enterprises. Once the definitive agreements are finalized, Barrick will update the unpublished 2010 feasibility study.

    The structure, the gold miner said, ensures that Balochistan receives a “substantial” share of the benefits generated by the operation.

    “Balochistan’s shareholding in Reko Diq will be fully funded by its partners and the federal government, which means that the province will reap the dividends, royalties and other benefits of its 25% ownership without having to contribute financially to the construction and operation of the mine,” chief executive Mark Bristow said in the statement.

    Barrick said it will implement a range of social development programs before the definitive agreements are completed, and vowed to spend $70 million over the construction period. This includes upfront commitments of up to $3 million in the first year following closing, and up to $7 million in the second year.

    The operation will also advance royalties to Balochistan’s government of up to $5 million in the first year following closing, up to $7.5 million in the second year, and up to $10 million per year thereafter until commercial production starts. This is subject to a cumulative $50 million maximum of advance payments, Barrick said.

    The Reko Diq project, which hosts one of the world’s largest undeveloped open pit copper-gold deposits, has been on hold since 2011 due to a dispute over the legality of its licensing process.

    Barrick solved the long-running dispute earlier this year, reaching a preliminary out-of-court deal that cleared the path for a final agreement on how to run the mine and profit-sharing arrangements.

    The project is now seeking financing partners, with a target of 50% debt to total capitalization.

    The company plans to deliver production as early as 2027-2028 from Phase 1 at a cost of around $4 billion, with Phase 2 to follow in five years at a cost of roughly $3 billion.

    The miner noted that construction of the first phase of Reko Diq, close to the borders of Iran and Afghanistan, will follow the updated study.

    Two-phase development

    The conceptual design calls for an open pit to be built in two phases, starting with a plant that will be able to process approximately 40 million tonnes of ore per annum, which could be doubled in five years.

    The latest plan calls for double the annual throughput capacity, and more than twice the investment, estimated in the unpublished 2010 feasibility study.

    During peak construction, the project is expected to employ 7,500 people, and once in production it will create 4,000 long-term jobs over the expected 40-year life of the mine.

    Some analysts believe that Pakistan’s lack of experience in mining and its political instability make this a risky deal.

    Bristow, however, said in May that he had worked in challenging situations all his life and that he was “very comfortable” with the project. He added that this was the “perfect opportunity for the mining industry to demonstrate what it can bring to an economy” of a region that has been “neglected” and struggles to get access to potable water.

    Gemfields on alert as insurgent attacks creep closer to Montepuez

    Cecilia Jamasmie | July 20, 2022 | 

    Montepuez is an open-pit mine, considered the world’s most lucrative ruby operation. 
    (Image courtesy of Gemfields.)

    Africa-focused Gemfields (LON: GEM) (JSE: GML) warned on Wednesday that insurgent attacks are edging closer to its ruby mine in Mozambique’s northern Cabo Delgado province but said operations had not been halted.


    An Islamic State-linked insurgency broke out in October 2017 in Cabo Delgado, a coastal province rich in natural gas reserves and host to an estimated $60 billion worth of international investment in gas projects.

    The violence has so far left at least 3,100 dead, according to the Armed Conflict Location & Event Data Project (ACLED), which tracks political violence around much of the world.

    Conflict there also has displaced nearly 856,000 people, nearly half of them children, according to UNICEF.


    Gemfields said the latest attack hit the Muaja village, which is about 30km by road from its Montepuez ruby mine. A previous incident last month occurred about 65km east-north-east of the operation, in which the company holds a 75% interest.

    “A large number of people are reportedly relocating to Nanhupo and Namanhumbir, where the mining operations are located,” Sean Gilbertson, CEO of Gemfields, said in the statement. “Given recent developments and the associated security review, operations continue with increased vigilance,” he added.

    Violence has also affected other miners in the region recently. In June, Australia’s Triton Minerals (ASX: TON) reported an attack on its Ancuabe graphite project site. Syrah Resources (ASX: SYR) briefly suspended logistics and staff movement at its flagship Balama graphite operation due to assaults close to its primary transport route.

    Montepuez Ruby Mining Limitada (MRM) produced 83,990 carats of premium rubies in 2021 and has generated $827.1 million in sales since 2014.

    Kenorland Minerals options Alaska copper project to Antofagasta

    Cecilia Jamasmie | July 20, 2022 | 

    Image from Kenorland Minerals.

    Canadian junior Kenorland Minerals (TSX-V: KLD) has inked an earn-in agreement with Antofagasta (LON: ANTO) that gives the Chilean miner an option to acquire a 70% interest in the Tanacross copper-gold project in Alaska.


    Antofagasta could become the project’s majority owner by spending $30 million on exploration over eight years and delivering a preliminary economic assessment on the asset, near the Yukon-Alaska border.

    The Santiago-based miner will also make cash payments of $1 million to Kenorland and another $4 million upon exercise of the option.

    During the option period, Antofagasta will fund all exploration and Kenorland will be the initial operator, the companies said.

    “The property, which covers numerous mineralised systems and target areas, warrants significant exploration to unlock the discovery potential that we believe exists,” Kenorland Minerals chief executive Zach Flood said in the statement.

    Once Antofagasta has earned its 70% interest, the companies will form a 70:30 joint venture. If either party’s interest in the JV falls below 10%, its interest will convert into a 2% net smelter returns (NSR) royalty of which a 0.5% NSR can be purchased by the other party for $2 million.

    Back to North America

    Antofagasta’s move comes as the company reported a drop in production for the second quarter of the year, which forced it to lower its full-year output target to 640,000-660,000 tonnes.

    Becoming involved in the Tanacross project marks the company’s return to North America. In January, Antofagasta lost its battle over its proposed Twin Metals underground copper-nickel mine and processing facility in Minnesota.

    The US Department of the Interior cancelled two mineral leases for Antofagasta’s proposed mine, effectively killing the project and handing a major win to environmentalists.

    Kenorland currently holds three projects in Quebec, where work is being completed under joint venture and earn-in agreements with third parties.

    Drought, operational issues bring Antofagasta guidance down

    Antofagasta had disclosed in early June a leak in a pipeline at its flagship Los Pelambres copper mine’s concentrator plant.

    It had also said that the operation had been one of the company’s mines hardest hit by the lack of rainfall in its home country.

    Copper miners across Chile have been forced in recent years to find alternative means to feed water to their mines as the country’s longest drought in decades and receding aquifers have hampered operations. Many have sharply reduced use of continental freshwater or turned to desalination plants.

    The country’s copper agency Cochilco estimates that mining’s use of seawater — either used directly or desalinated — will increase 167% by 2032, while freshwater use will decline 45%. By the end of that period, 68% of water used by the industry will come from the ocean, the agency has said.

    Antofagasta noted that throughput at Los Pelambres was 36.2% lower than in the first half of 2021, while grades at its Centinela Concentrates were 35.4% lower.

    Second-quarter output fell by 6.5% quarter-on-quarter to 129,800 tonnes. The concentrate pipeline incident at Los Pelambres reduced production by about 23,000 tonnes in the quarter.

    RELATED: Kenorland Minerals options Alaska copper project to Antofagasta

    Antofagasta had previously said it expected to produce between 660,000 and 690,000 tonnes in 2022.

    “Looking to the second half of the year, we expect production to increase quarter-on-quarter as throughput recovers at Los Pelambres with increased water availability, grades improve at Centinela Concentrates and as the copper in concentrates, stockpiled at Los Pelambres’ concentrator plant, is moved to the port,” chief executive Iván Arriagada said.

    The company, majority-owned by Chile’s Luksic family, one of the country’s wealthiest, highlighted that from April this year all mining operations have been operating solely using renewable energy. This has “significantly” reduced the company’s Scope 2 emissions, Antofagasta said, without providing further details.

    Expansion

    Antofagasta is close to finishing a much-needed $2.2 billion expansion of Los Pelambres. The project, which was 82% completed by the end of the second quarter, will add 60,000 tonnes of copper a year over the first 15 years to the company’s overall production. 

    The plan includes boosting throughput at the plant from 175,000 tonnes of ore a day to an average of 190,000 tonnes a day.

    It also contemplates the construction of a desalination plant and water pipeline, which is scheduled to be completed in early 2023.

    The facility will benefit the existing operation in cases of prolonged or severe drought, such as the one currently hitting miners and wine makers alike. It could also be used for a potential further expansion, which may follow if Antofagasta can secure the required environmental and regulatory approvals.