Sunday, December 08, 2024

 

To remember conversations, keep making new brain cells


USC-led study of patients with epilepsy shows how making new neurons benefits cognition in adults.



Keck School of Medicine of USC

Image: Newborn neuron (green and purple cell) in brain tissue from patients with epilepsy. Credit: Aswathy Ammothumkandy/Bonaguidi Lab/USC Stem Cell




Why do adults make new brain cells? A new study published in Cell Stem Cell provides the first cellular evidence that making new brain cells in adults supports verbal learning and memory, which enables people to have conversations and to remember what they hear. This discovery could point to new approaches to restore cognitive function. 

The study, led by scientists from USC Stem Cell and the USC Neurorestoration Center at the Keck School of Medicine of USC, relied on brain tissue from patients with drug-resistant cases of mesial temporal lobe epilepsy (MTLE), which involves seizures as well as accelerated cognitive decline. 

“Treating patients with epilepsy allows us to investigate the purpose of generating new neurons in our brains. We observe that one of the reasons is to learn from the conversations we have,” said co-corresponding author Michael Bonaguidi, an associate professor of stem cell biology and regenerative medicine, gerontology, biochemistry and molecular medicine, biomedical engineering, and neurological surgery, and assistant director of the USC Neurorestoration Center. 

“These findings are clearly important for all people who suffer from learning and cognitive decline, but they are also specifically relevant to the epilepsy patients who participated in the research,” added co-corresponding author Charles Liu, a professor of neurological surgery, neurology, and biomedical engineering, director of the USC Neurorestoration Center, and director of the USC Epilepsy Care Consortium.

In the study, first authors Aswathy Ammothumkandy and Luis Corona from USC and their collaborators investigated how the process of making new brain cells—called neurogenesis—affects different types of cognitive decline during the progression of MTLE.

The researchers found that MTLE patients experience cognitive decline in many areas, including verbal learning and memory, intelligence, and visuospatial skills. For verbal learning and memory, as well as for intelligence, patients undergo a dramatic decline during the first 20 years of seizures. During those same two decades, neurogenesis slows to the point where immature brain cells become nearly undetectable. 

Based on these observations, the scientists searched for links between the number of immature brain cells and the major areas of MTLE-related cognitive decline. They found the strongest association occurs between the declining number of immature brain cells and verbal learning and memory. 
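To make that analysis concrete, here is a minimal sketch in Python of the kind of association test described above; the patient values are invented for illustration and are not the study's data.

# Illustrative association between immature neuron density and verbal memory scores.
# The numbers below are hypothetical placeholders, not data from the study.
from scipy.stats import spearmanr

immature_neurons_per_mm2 = [12.0, 9.5, 7.1, 5.0, 3.2, 1.1, 0.4]  # hypothetical cell densities
verbal_memory_scores = [52, 48, 45, 38, 33, 27, 24]               # hypothetical test scores

rho, p_value = spearmanr(immature_neurons_per_mm2, verbal_memory_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")  # a strong positive rho would mirror the reported link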

This is a surprising finding because neurogenesis levels in rodents and other lab animals contribute to a different type of learning and memory that relies on visuospatial skills. The role of neurogenesis in verbal learning and memory highlights the value of studying human brain tissue. These highly valuable surgical specimens were generously donated by patients of the Rancho Los Amigos Epilepsy Center, a unique resource in the public safety-net health system advancing health care and research equity for the underinsured population in the region. During the complex operations, the neurosurgeons carefully removed the affected hippocampus in one piece, curing the majority of the patients of their seizures.

“Our study provides the first cellular evidence of how neurogenesis contributes to human cognition—in this case, verbal learning and memory,” said Bonaguidi. “This work opens a gateway for future studies exploring ways to improve verbal learning and memory by boosting neurogenesis, possibly through exercise or therapeutic drugs. Those approaches could help not only patients with MTLE, Alzheimer’s disease and dementia, but also all of us with aging brains.”

Neuropsychologist Jason Smith from the Medical University of South Carolina is also a co-corresponding author. Additional authors are: Kristine Ravina, Victoria Wolseley, Jeremy Nelson, Nadiya Atai, Aidin Abedi, Lina D’Orazio, Alisha Cayce, Carol McClearly, George Nune, Laura Kalayjian, Darrin Lee, Brian Lee, Christianne Heck, Robert Chow, and Jonathan Russin from USC; Nora Jimenez from Los Angeles General Medical Center; Michelle Armacost from USC and Rancho Los Amigos National Rehabilitation Center; and Virginia Zuverza-Chavarria from Rancho Los Amigos National Rehabilitation Center.

Thirty percent of this work was supported by federal funding from the National Institutes of Health (grants R56AG064077, R01AG076956, and U01MH098937). Additional support came from the Donald E. and Delia B. Baxter Foundation, L.K. Whittier Foundation, Simon-Strauss Foundation, Cure Alzheimer’s Fund, Eli and Edythe Broad Foundation, USC Neurorestoration Center, Rudi Schulte Research Institute, American Epilepsy Society, and California Institute for Regenerative Medicine.

 

Estimation of US cancer deaths averted from prevention, screening, and treatment efforts, 1975-2020



JAMA Oncology




About The Study:

 In this model-based study using population-level cancer mortality data, an estimated 5.94 million cancer deaths were averted for breast, cervical, colorectal, lung, and prostate cancers combined from 1975 to 2020. Prevention and screening accounted for 8 of every 10 averted deaths, and the contribution varied by cancer site. Despite progress, efforts to reduce the U.S. cancer burden will require increased dissemination of effective interventions and new technologies and discoveries. 
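A back-of-the-envelope reading of those two figures (simple arithmetic in Python on the numbers quoted above):

# Split of the estimated 5.94 million averted deaths (1975-2020) implied by
# "prevention and screening accounted for 8 of every 10 averted deaths".
total_averted = 5.94e6
prevention_screening = 0.8 * total_averted
treatment = total_averted - prevention_screening
print(f"Prevention and screening: {prevention_screening:,.0f}")  # ~4,750,000
print(f"Treatment:                {treatment:,.0f}")             # ~1,190,000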



Corresponding Author: To contact the corresponding author, Katrina A. B. Goddard, PhD, email katrina.goddard@nih.gov.

 To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamaoncol.2024.5381)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Embed this link to provide your readers free access to the full-text article. This link will be live at the embargo time: https://jamanetwork.com/journals/jamaoncology/fullarticle/10.1001/jamaoncol.2024.5381?guestAccessKey=b09bd8fb-cf28-4cda-be14-5d139146d1d5&utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=120524

 

Satellite-based and street-view green space and adiposity in US children


The findings support adding specific green space components as an urban planning and public health intervention strategy to combat the prevalence of childhood obesity in the U.S.

JAMA Network Open




About The Study: 

The results of this cohort study of U.S. children suggest that higher levels of satellite-based normalized difference vegetation index greenness and percentages of street-level green space components (flowers, plants, and fields) were associated with lower adiposity. The findings support the exploration of increasing residential green space levels and adding specific green space components as an urban planning and public health intervention strategy to combat the prevalence of childhood obesity in the U.S.
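For context, the satellite greenness metric mentioned above, the normalized difference vegetation index (NDVI), is computed from near-infrared and red reflectance; a minimal Python sketch with illustrative reflectance values:

# Normalized difference vegetation index: NDVI = (NIR - Red) / (NIR + Red).
# The reflectance values below are illustrative, not from the study.
def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

print(ndvi(nir=0.45, red=0.10))  # ~0.64: dense vegetation
print(ndvi(nir=0.20, red=0.15))  # ~0.14: sparse vegetation or built-up area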


Corresponding Author: To contact the corresponding author, Li Yi, PhD, email li_yi@hsph.harvard.edu.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2024.49113)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Embed this link to provide your readers free access to the full-text article. This link will be live at the embargo time: http://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2024.49113?utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_term=120524


Method to enhance solubility of pea protein favors its use in food and beverages



The strategy developed at the State University of Campinas consists of submitting the ingredient to heat treatment and combining it with guarana extract and vitamin D. The result could become an alternative to animal products.



Fundação de Amparo à Pesquisa do Estado de São Paulo

Image: Oil-in-water emulsions stabilized with pea protein and guarana extract. Credit: Rosiane Lopes da Cunha & Marluci Palazzolli da Silva Padilha




Research conducted at the State University of Campinas (UNICAMP) in São Paulo state, Brazil, shows that heat treatment of pea protein and addition of guarana extract result in a compound with significant potential to be used as an ingredient of plant-based beverages, offering a healthy and nutritious option for the food industry. 

The pea protein combined with guarana extract was found to stabilize an oil-in-water emulsion enriched with vitamin D3. 

The researchers who carried out the study, which was supported by FAPESP, are affiliated with the Process Engineering Laboratory at the School of Food Engineering (FEA-UNICAMP).

An article describing their findings is published in the journal Food Research International.

“Interest in plant-based proteins has grown in response to the boom in demand for foods of non-animal origin. Their growing use in food formulations is associated with technological properties such as the capacity to stabilize emulsions, form gel or foam, boost satiety, and supply essential amino acids,” said food engineer Rosiane Lopes da Cunha, last author of the article and full professor at FEA-UNICAMP.

However, water solubility of plant-based proteins is generally poor, a problem that impairs their properties and hinders their inclusion in food products. Scientists have therefore sought ways to improve solubility, some of which involve heat treatment and conjugation with extracts from plants rich in phenolic compounds, such as guarana (Paullinia cupana).

“The addition of guarana extract is an innovative strategy designed to valorize a product of the Amazon rich in bioactive compounds that interact with pea protein to enhance its capacity to stabilize emulsions,” said Marluci Palazzolli da Silva Padilha, corresponding author of the article and a postdoctoral researcher at FEA-UNICAMP. 

Pea protein was chosen for its attractive properties, especially low cost, low allergenicity, and good emulsifying and gelling capacity, but commercial use of the isolate faces challenges such as unpleasant taste (off-flavor) and gritty texture, as well as the already noted poor water solubility, which prevents its use in many food and drink products.

In the study, the researchers set out to verify whether heat treatment and conjugation with guarana extract altered the properties of pea protein so as to support its inclusion in food product formulations.

Methodology

In the first stage of the project, the researchers analyzed the technological changes undergone by pea protein as a result of heat treatment and conjugation with guarana extract. In the second stage, they prepared an emulsion using modified pea protein and vitamin D3. While vitamin D3 supplementation boosts the immune system and prevents rickets, it is unstable in water-based beverages and therefore requires conjugation with a stabilizer.

The researchers examined the impact of storing emulsions at 25 °C in the presence of ultraviolet light (UV). “The results showed that at least 77% of the vitamin D3 was preserved in these formulations after 30 days of storage,” Padilha said.

Lastly, an experiment designed to simulate the process of digestion was performed in order to assess the bioavailability of the vitamin D3. In this third stage, the researchers concluded that vitamin D3 bioavailability was higher in emulsions stabilized with pea protein and guarana than in emulsions stabilized only with pea protein.

Another promising conclusion was that the processes deployed in the study to modify pea protein are easily scaled up for use in the food industry. Heat treatment for 30 minutes at 90 °C is similar to slow pasteurization of dairy products and fruit juices, and pH adjustment (common in the food industry) to control the interaction between the phenolic compounds in the guarana extract and the pea protein can be monitored to assure safety and obtain the appropriate flavor.
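A minimal sketch, in Python, that collects the processing and storage parameters reported above into one record; the class and field names are assumptions for illustration, while the values come from the text.

# Reported parameters: heat treatment comparable to slow pasteurization, followed by
# conjugation with guarana extract and UV-exposed storage of the vitamin D3 emulsion.
from dataclasses import dataclass

@dataclass
class PeaProteinProcess:               # hypothetical container, for illustration only
    heat_temp_c: float = 90.0          # heat treatment temperature (degrees C)
    heat_time_min: float = 30.0        # heat treatment duration (minutes)
    storage_temp_c: float = 25.0       # emulsion storage temperature, under UV light (degrees C)
    storage_days: int = 30             # storage period (days)
    vitamin_d3_retained: float = 0.77  # at least 77% of vitamin D3 preserved after storage

process = PeaProteinProcess()
print(f"Vitamin D3 retained after {process.storage_days} days: >= {process.vitamin_d3_retained:.0%}")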

“This approach opens up novel possibilities for the development of plant-based emulsifiers with enhanced functional properties. The results suggest that other plant proteins can also benefit from the strategy so as to bolster their applications in the food industry,” Cunha said. “However, it’s important to note that optimization of plant protein modification processes depends on the composition of both the protein and the phenolic extract utilized.”

About FAPESP

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the state of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration.

 

Burned rice hulls could help batteries store more charge



New research finds hard carbon in rice hull ash, providing a cheap, domestic source of the material that can replace graphite in lithium-ion or sodium-ion battery anodes



University of Michigan

 


 


A closer inspection of ash from burned rice hulls, the hard outer layer of rice grains, revealed a form of carbon that could nearly double the energy density of typical lithium-ion or sodium-ion batteries. 

 

This sustainable source of 'hard' carbon, which outperforms ordinary graphite in battery electrodes, was discovered at the University of Michigan.

 

This is the first demonstration of hard carbon made through combustion. It was previously thought hard carbon could only be made by heating biomass, such as agricultural waste, to about 1200°C (2200°F) in an oxygen-free environment like nitrogen or argon. 

 

Rather than importing graphite mined from China or Mexico, rice hull ash could provide a higher-quality domestic material for making battery electrodes. The process is also more sustainable than producing graphite from biomass, which must be heated to 2000°C (3600°F) or higher, producing five to 10 tons of CO2 for every ton of battery-grade graphite.

 

Although most rice hulls end up in landfills, burning rice hulls provides a carbon neutral source of electricity. Wadham Energy LP in the Sacramento Valley of California generates 200,000 megawatt-hours of electricity per year by burning the agricultural byproduct—enough energy to power about 22,000 homes. 

 

"The CO2  released while burning rice hulls comes from the same COthe rice plant took up from the atmosphere during photosynthesis, making the electricity produced green and carbon neutral," said Richard Laine, U-M professor of materials science and engineering and macromolecular science and engineering and corresponding author of the study recently published in Advanced Sustainable Systems. 

 

With about 20 billion pounds of rice grown annually in the United States, there is plenty of room to scale up.

 

In prior work, the research team demonstrated methods to partially remove the silica in rice hull ash, which contains about 90% silica and 10% carbon. That silica can be used to produce high-purity silicon used in solar cells or semiconductors. Once the silica is partially removed from the rice hull ash through a process called depolymerization, the remaining ash is about 60%-70% carbon. 
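An illustrative mass balance consistent with the composition figures above, under the simplifying assumption that depolymerization removes only silica (a sketch, not the team's actual process chemistry):

# Rough mass balance: start from 100 g of rice hull ash (90% silica, 10% carbon)
# and ask how much silica must be removed for the residue to be ~65% carbon.
# Assumes the carbon mass is unchanged by depolymerization (an illustrative assumption).
ash = 100.0
carbon = 0.10 * ash                                  # 10 g carbon
target_carbon_fraction = 0.65                        # midpoint of the reported 60-70%
residue = carbon / target_carbon_fraction            # ~15.4 g of residue
silica_removed = (0.90 * ash) - (residue - carbon)   # ~84.6 g of the original 90 g
print(f"Residue: {residue:.1f} g, silica removed: {silica_removed:.1f} g "
      f"({silica_removed / (0.90 * ash):.0%} of the silica)")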

 

The leftover carbon was thought to be shapeless and disorganized, a material called amorphous carbon, based on the patterns made by X-rays shone through the material. However, spectroscopy techniques specialized for molecular-level detail revealed tiny islands of graphite that exist on the nanoscale (for scale, one nanometer is one billionth of a meter) within the amorphous carbon matrix. This blend of amorphous carbon dotted with graphite is called hard carbon.

 

"Hard carbon can be produced by combustion in this case because as you burn away the carbon of rice hulls, you create a shell of silica around the remaining carbon and it bakes it like a pie," Laine said.

 

When testing the electrochemical properties of hard carbon obtained from rice hull ash, it outperformed both commercial hard carbon and graphite as the anode of a lithium-ion battery, the point where charge flows out of the battery.

 

A gram of commercial hard carbon accepts enough lithium to store about 500 milliampere-hours (mAh), a unit of electrical charge often used to describe battery storage capacity. In contrast, a gram of graphite accepts about 370 mAh, meaning commercial hard carbon stores roughly 35% more charge per gram. Rice hull ash hard carbon exceeds both, with a storage capacity of more than 700 mAh per gram, nearly double that of graphite. 
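The capacities quoted above, compared directly (values from the article; the ratios are simple arithmetic):

# Anode storage capacities cited in the article, in milliampere-hours per gram.
capacities_mah_per_g = {
    "graphite": 370,
    "commercial hard carbon": 500,
    "rice hull ash hard carbon": 700,   # "more than 700 mAh"
}
graphite = capacities_mah_per_g["graphite"]
for material, capacity in capacities_mah_per_g.items():
    print(f"{material}: {capacity} mAh/g ({capacity / graphite:.2f}x graphite)")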

 

The nanoporous structure of the isolated hard carbon is thought to contribute to the increased lithium capacity.

 

Turning agricultural waste into a valuable product, rice hull ash hard carbon can help meet the growing demand for batteries for use in electric vehicles and storing intermittent renewable energy while decreasing both cost and emissions.

 

The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

 

Karlsruhe Institute of Technology in Germany also participated in this research through co-author Sylvio Indris. Wadham Energy supplied the rice hull ash used in the research.

 

The process was studied in part at the Michigan Center for Materials Characterization. The work at the University of Michigan was primarily funded by the National Science Foundation and Mercedes-Benz Research & Development North America. 

 

Study: An unexpected source of hard carbon, rice hull ash, provides unexpected Li+ storage capacities (DOI: 10.1002/adsu.202400667)

 

KIER's breakthrough in waste plastic processing with heat circulation

Development of a continuous, large-scale pyrolysis process for waste plastics that overcomes the limitations of conventional methods


National Research Council of Science & Technology


Image: The pyrolysis oil produced using the developed process. Credit: Korea Institute of Energy Research


Dr. Byungwook Hwang’s research team from the CCS Research Department at the Korea Institute of Energy Research (KIER) has successfully developed a process that applies the circulating fluidized bed technology, commonly used in coal-fired power plant boilers, to recycle waste plastics and produce pyrolysis oil on a large scale.

The COVID-19 pandemic has led to a sharp increase in household plastic waste worldwide. In response, countries around the globe are focusing on recycling technologies, such as pyrolysis, for eco-friendly waste plastic management. Recently, the Korean government announced plans to expand the annual volume of plastic waste processed via pyrolysis from 10,000 tons to 900,000 tons by 2030.
*Pyrolysis Recycling: A process in which mixed waste plastics are heated in a high-temperature environment, breaking them down into gas, liquid (pyrolysis oil), and solid forms. The resulting liquid (pyrolysis oil) is then recycled as a raw material for producing new plastics, various high-value-added chemicals (such as BTX), and fuels.
**"Leading the Circular Economy and Carbon Neutrality Through Waste Plastic Pyrolysis" (Ministry of Environment Press Release, June 21, 2021)

Currently, the kiln method is used in the Republic of Korea for the pyrolysis of waste plastics. This process involves placing waste plastics inside a cylindrical chamber, applying heat externally, and condensing the resulting vapor to produce pyrolysis oil. While the process design is relatively simple, it faces scalability limitations, as heat transfer from the exterior to the center of the cylinder becomes increasingly difficult as the size of the chamber increases.

The kiln method can process only up to 20 tons of plastic per day, which falls far short of the 900,000 tons per year target set by the government for pyrolysis processing. Additionally, the kiln method requires continuous external heat supply and cannot operate continuously, as the process must be paused to handle residual waste before restarting. These limitations make it inefficient for large-scale applications.

The research team developed a technology to recycle waste plastics using a circulating fluidized bed (CFB) process, overcoming the limitations of conventional methods. The CFB process is a technology in which heat carriers, such as high-temperature sand, circulate to enable continuous heat transfer during reactions. For the first time globally, the team successfully applied the CFB process to the pyrolysis of waste plastics, enabling both continuous operation and scalability—key challenges of existing processes.

The core of the developed process lies in heat circulation. In this system, catalyst particles heated in the combustion reactor are circulated to the pyrolysis reactor, where they transfer heat to facilitate the pyrolysis of waste plastics. After transferring heat, the catalyst, now at a lower temperature, returns to the combustion reactor along with the residual waste. The residual waste is incinerated, generating heat to reheat the catalyst. The reheated catalyst is then recirculated back to the pyrolysis reactor, maintaining a continuous process of heat transfer and pyrolysis.

By utilizing this process, a continuous operation is achievable as the cycle of raw material input, heat supply, and residual waste treatment is seamlessly maintained. Additionally, since the catalyst moves freely within the reactor, heat can be effectively transferred from the center to the edges of the reactor, enabling scalability and the development of larger systems.

The research team conducted pyrolysis experiments on waste plastics using their process, handling up to 100 kilograms per day. They confirmed that the process can pyrolyze not only plastics but also solid recovered fuel (SRF)* made from household waste. When SRF was processed, the oil yield was approximately 37%, about 1.2 times that of conventional methods. Additionally, the produced pyrolysis oil showed a significant improvement in quality, with a 45% content of light fractions**, nearly double that of existing processes.

*Solid Recovered Fuel (SRF): a type of manufactured fuel derived from Municipal Solid Waste (MSW) such as synthetic resins, rubber, and wood, designed for use in power plants and other facilities. Improper disposal of SRF can result in environmental pollution, emphasizing the importance of eco-friendly recycling methods.

**Light Fraction Content: The proportion of light hydrocarbons (C5-C12) in pyrolysis oil, which are crucial components for high-quality fuel and chemical production. This serves as a key indicator of pyrolysis efficiency and product quality, with a higher content signifying superior pyrolysis oil suitable for diverse industrial applications.
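An illustrative back-calculation from the reported improvements (the conventional-process baselines below are inferred from the stated ratios, not quoted directly from the article):

# Reported results for the circulating fluidized bed (CFB) process on SRF feedstock.
cfb_oil_yield = 0.37        # ~37% pyrolysis oil yield
cfb_light_fraction = 0.45   # ~45% light (C5-C12) hydrocarbon content in the oil
yield_ratio = 1.2           # about 1.2 times that of conventional methods
light_ratio = 2.0           # nearly double that of existing processes

# Implied conventional baselines (simple division, for illustration only).
print(f"Implied conventional oil yield:      ~{cfb_oil_yield / yield_ratio:.0%}")       # ~31%
print(f"Implied conventional light fraction: ~{cfb_light_fraction / light_ratio:.0%}")  # ~23%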

Dr. Byungwook Hwang, the lead researcher, stated, “The most significant achievement of this study is the design and development of a technology capable of continuously processing waste, including plastic waste, through pyrolysis.” He added, “This core pyrolysis technology is highly suitable for achieving Korea’s waste plastic pyrolysis targets, as it enables the processing of large volumes of waste plastics while producing high-quality pyrolysis oil.”

This research was conducted as part of the Korea Institute of Energy Research's R&D program. The results have been published in Chemical Engineering Journal (Impact Factor: 13.3), a leading journal in the field of chemical engineering.

Journal

Chemical Engineering Journal

DOI

10.1016/j.cej.2024.156257

Article Title

Development of a circulating fluidized bed for a 100 kg/day waste plastic pyrolysis-combustion system

 

So you want to build a solar or wind farm? Here’s how to decide where


MIT engineers show how detailed mapping of weather conditions and energy demand can guide optimization for siting renewable energy installations

"Decarbonized Energy System Planning with High-Resolution Spatial Representation of Renewables Lowers Cost"


Massachusetts Institute of Technology




Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning using fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations. This also leads to more efficient and economically viable operations.

The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing availability of clean power when it’s needed, the researchers found.

The study, which will appear in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland. 

Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of power entering the grid, she says. By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”

Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling on a scale of less than 10-kilometer (about 6-mile) resolution. “It’s a way of determining where should we exactly build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution. To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

By comparing the results of siting based on a typical method vs. their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says. “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour, and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”
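A toy sketch in Python of the objective Qiu describes: choose how much capacity to build at each candidate site so that hourly generation tracks hourly demand as closely as possible. The site profiles and demand series are invented, and a real planning model would also include storage, transmission, and costs.

# Toy siting problem: pick non-negative capacities x so that generation A @ x
# matches the demand profile b as closely as possible (least-squares mismatch).
import numpy as np
from scipy.optimize import nnls

# Columns: per-unit hourly output of three hypothetical sites
# (day-peaking solar, night-peaking wind, afternoon-peaking wind).
A = np.array([
    [0.9, 0.1, 0.3],   # morning
    [0.7, 0.2, 0.6],   # afternoon
    [0.1, 0.8, 0.5],   # evening
    [0.0, 0.9, 0.2],   # night
])
b = np.array([1.0, 1.2, 1.1, 0.9])  # hourly demand (arbitrary units)

capacities, mismatch = nnls(A, b)   # non-negative least squares
print("Capacities to build per site:", np.round(capacities, 2))
print("Residual supply-demand mismatch:", round(mismatch, 3))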

In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day. 

These insights were revealed through the integration of high-resolution weather data and energy system optimization used by the researchers. When planning with lower-resolution weather data, which was generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants. Consequently, the total system cost was much higher. The complementarity between wind and solar farms was enhanced by the high-resolution modeling due to improved representation of renewable resource variability.

The researchers say their framework is very flexible and can be easily adapted to any region to account for the local geophysical and other conditions. In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” The work shows that using such high-resolution data coupled with a carefully formulated energy planning model “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

One thing that was surprising about the findings, says Amin, who is a principal investigator in the Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place in a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says. 

In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”

The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.

###

Written by David L. Chandler, MIT News Office

Impact studies should include high-sensitivity climate models



University of Reading





High-sensitivity climate models should not be excluded when predicting future regional climate impacts because the level of warming measured globally is not always the only good indicator of regional changes, a new study suggests. 

Some models which scientists use to predict future changes in Earth's climate show faster global warming than others, leading to temperature projections that are considered unlikely. Some experts suggest that these more sensitive (or ‘hotter’) models should be omitted when studying future climate impacts.  

New research published today (Thursday, 5 December) in Earth’s Future shows no clear correlation between the rate of warming and some important regional drivers. Instead, how the behaviour of regional weather patterns controls impacts needs to be considered too. 

Dr Ranjini Swaminathan, lead author at the University of Reading and National Centre for Earth Observation, said: "We should not exclude climate models from impact assessments based on their climate sensitivity as this could lead to ignoring future outcomes that are potentially serious and realistic.

“What happens globally doesn't always match what happens locally and we show that no universal correlation exists between climate sensitivity and regional climate drivers. For example, we see a general increase in the number of drought events in the future, but we don’t see a statistically significant correlation between the change in the number of drought events and climate sensitivity. This is because the magnitude of global warming is just one of many factors influencing drought and is often not the most important. 

“Our results contradict suggestions that models showing higher warming should be excluded from studies about future climate impacts.”  
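A minimal sketch of the kind of check described in the quote: correlating each model's climate sensitivity with its projected change in drought events across an ensemble. The per-model numbers are invented for illustration.

# Hypothetical ensemble: equilibrium climate sensitivity (degrees C) for each model and its
# projected change in the number of regional drought events. Invented values.
from scipy.stats import pearsonr

climate_sensitivity = [2.5, 3.0, 3.4, 3.9, 4.5, 5.0, 5.6]
drought_event_change = [4, 7, 3, 8, 5, 9, 4]

r, p = pearsonr(climate_sensitivity, drought_event_change)
print(f"r = {r:.2f}, p = {p:.2f}")  # a large p-value would mean no significant correlation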

Preparing communities 

The researchers studied how different models predict three major climate impacts: heavy rains that cause flooding, droughts that affect farming and water supplies, and conditions that increase the risk of wildfire. They looked at these across different parts of the world, including the Amazon rainforest, Australia, East Asia, and parts of Africa and India. 

They discovered that how much global warming a model predicts isn't the main factor in determining local impacts - regional factors matter too. If models are selected based only upon their prediction of global warming, important and physically plausible outcomes of regional climate impacts could be missed. This could lead to an inaccurate portrayal of the risks that need to be considered by governments and communities as they adapt to climate change. 

 

Breakthrough in electric vehicle technology: Advanced SOC estimation using random forest


Beijing Institute of Technology Press Co., Ltd
Image: State of charge estimation for electric vehicles using random forest. Credit: Green Energy and Intelligent Transportation





 

In a significant leap forward for the electric vehicle (EV) industry, researchers have applied the random forest (RF) algorithm to estimating battery state of charge (SOC). Accurate SOC estimation is critical for optimizing battery usage, predicting range, and ensuring the longevity of EV batteries, yet traditional methods have struggled to capture the complex, nonlinear behavior of batteries under dynamic driving conditions. The RF algorithm excels in real-world scenarios by leveraging decision trees and ensemble learning to learn robust relationships between input parameters, such as voltage, current, ambient temperature, and battery temperature, and SOC values. This innovative method promises to enhance the efficiency and reliability of EVs, addressing one of the industry's most pressing challenges.

 

The RF model significantly outperforms previous methods, including the Extreme Learning Machine (ELM), demonstrating superior accuracy and robustness. Comprehensive comparative analyses reveal that the RF model achieves a lower Root Mean Squared Error (RMSE) of 5.9028% compared to 6.3127% for ELM, and a lower Mean Absolute Error (MAE) of 4.4321% versus 5.1112% for ELM across rigorous k-fold cross-validation testing. This enhanced precision underscores the potential of RF in advancing electric mobility.
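A minimal sketch, in Python with scikit-learn, of the modeling setup described above: a random forest regressor evaluated with k-fold cross-validation using RMSE and MAE. The feature list follows the article, but the data here are synthetic placeholders and the hyperparameters are assumptions.

# Random forest SOC estimation with k-fold cross-validation (synthetic placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
# Features: voltage, current, ambient temperature, battery temperature (synthetic values).
X = np.column_stack([
    rng.uniform(300, 400, n),    # pack voltage (V)
    rng.uniform(-150, 150, n),   # current (A)
    rng.uniform(-10, 35, n),     # ambient temperature (degrees C)
    rng.uniform(0, 45, n),       # battery temperature (degrees C)
])
y = rng.uniform(0, 100, n)       # SOC in percent (placeholder target)

rmse, mae = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmse.append(mean_squared_error(y[test_idx], pred) ** 0.5)
    mae.append(mean_absolute_error(y[test_idx], pred))

print(f"RMSE = {np.mean(rmse):.2f}%, MAE = {np.mean(mae):.2f}%")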

 

Utilizing real-world data from 70 trips of a BMW i3 EV, the study highlights the practical application of the RF model. The integration of this SOC estimation approach into the battery management system of vehicles like the BMW i3 holds the key to more efficient and dependable EV operations. The RF model's ability to handle large datasets, robustness to noise, and feature importance analysis makes it a promising solution for the EV industry.

 

The research opens new avenues for further exploration, including expanding the scope of input parameters, exploring diverse input-output configurations tailored to specific driving conditions, and incorporating feature selection techniques. These endeavors promise to further enhance the accuracy and applicability of the RF model in real-world EV applications.

 

This groundbreaking study not only offers a glimpse into the future of electric mobility but also sets a new standard for SOC estimation in EVs. By harnessing the power of machine learning and advanced algorithms, the RF model promises to revolutionize battery management, improve EV range prediction accuracy, and contribute to the sustainability and efficiency of electric vehicles. As the study progresses, further research and real-world applications will likely provide additional insights and refinements, leading to even more robust and adaptable SOC estimation systems.

 

Authors: Mohd Herwan Sulaiman, Zuriani Mustaffa

Article link: https://www.sciencedirect.com/science/article/pii/S277315372400029X

 

Researchers developing tool to instantly conceal and anonymize voices


The voice-changer system will produce computer-generated speech within milliseconds, allowing users to control factors like age, gender, and dialect.



University of Rochester

Image: Real-time voice-changer technology could help intelligence officers, witnesses to crimes, and whistleblowers protect their identities. Credit: University of Rochester illustration / AJ Pow




Researchers are developing a new system that will allow people to speak anonymously in real time through computer-generated voices to help protect privacy and avoid censorship or retaliation. The technology is intended to help people such as intelligence officers carrying out sensitive missions, crime witnesses concerned about being identified by perpetrators, and whistleblowers who fear retaliation.

The three-year project, led by Honeywell and including collaborators from the University of Rochester, Texas A&M, and the University of Texas at Dallas, is funded by the Intelligence Advanced Research Projects Activity (IARPA) and is part of the Anonymous Real-Time Speech (ARTS) program.

The voice-changer project has three main objectives. First, the system will transform what a user says into a digital voice within a few milliseconds, ensuring that it can be used in real-time conversations. Second, the team aims to allow users to specify what they call static traits, giving them control over the digital voice’s age, gender, and dialect. Lastly, they want to neutralize what they call dynamic traits, such as emotions or health status, that could potentially reveal the identity of the user.

“In the end, a 30-year-old woman from Texas will be able to instantaneously transform her voice to be output by the virtual speaker to sound like a 50-year-old man with a British accent, for example, without producing artifacts that can be traced back to the identity of the user,” says Zhiyao Duan, an associate professor of electrical and computer engineering and Rochester’s lead on the project. “And in addition to the latency requirements, we’ll also be working to ensure the intelligibility and naturalness of the computer-generated voice.”
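The latency requirement Duan mentions translates into a strict per-frame processing budget; here is a toy Python check of that budget, with a placeholder convert() function standing in for the actual voice-conversion model (which is not described in the release).

# Toy per-frame latency check for a streaming voice changer.
# convert() is a placeholder identity transform, not the project's model.
import time
import numpy as np

SAMPLE_RATE = 16_000                 # samples per second (an assumed rate)
FRAME_MS = 20                        # process audio in 20 ms frames (an assumed frame size)
frame = np.zeros(SAMPLE_RATE * FRAME_MS // 1000, dtype=np.float32)

def convert(audio_frame: np.ndarray) -> np.ndarray:
    return audio_frame               # placeholder: a real system would run voice conversion here

start = time.perf_counter()
_ = convert(frame)
latency_ms = (time.perf_counter() - start) * 1000.0
print(f"Per-frame processing latency: {latency_ms:.3f} ms (budget: a few milliseconds)")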

Duan says that while the roles on the project are fluid, his team at Rochester will initially focus on generating the virtual speakers and controls for the static traits, building on their experience in speaker modeling, disentangled speech representations, and voice synthesis. The team will first develop the technology to work in English. If successful, they plan to expand it to other languages such as Spanish, Mandarin, and Korean.

The team hopes these open-source voice-changer tools will have positive benefits far beyond the intended initial use cases. Still, the researchers recognize that people may have concerns about such powerful software.

“I think it’s natural for people to wonder what will happen if these tools get in the hands of bad actors,” says Duan. “It’s important to note that my lab and others around the globe are also working to develop deepfake detection tools so that people can discern whether something is said by an actual person or generated through algorithms. Those tools will be equally important to have.”