Monday, November 27, 2023

 

Novel AI system could diagnose autism much earlier


Meeting Announcement

RADIOLOGICAL SOCIETY OF NORTH AMERICA

Image: The top five white matter features (region pairs) in a single image. The color map is: yellow = superior cerebellar peduncle (R)/uncinate fasciculus (R), orange = column and body of fornix/posterior corona radiata (L), purple = splenium/retrolenticular internal capsule (L), blue = dorsal cingulum (L)/cres of fornix (R), green = splenium/external capsule (R).

Credit: RSNA/Mohamed Khudri, B.Sc.




CHICAGO – A newly developed artificial intelligence (AI) system that analyzes specialized MRIs of the brain diagnosed autism in children between the ages of 24 and 48 months with 98.5% accuracy, according to research being presented next week at the annual meeting of the Radiological Society of North America (RSNA).

Mohamed Khudri, B.Sc., a visiting research scholar at the University of Louisville in Kentucky, was part of a multi-disciplinary team that developed the three-stage system to analyze and classify diffusion tensor MRI (DT-MRI) of the brain. DT-MRI is a special technique that detects how water travels along white matter tracts in the brain.

“Our algorithm is trained to identify areas of deviation to diagnose whether someone is autistic or neurotypical,” Khudri said.

The AI system first isolates brain tissue images from the DT-MRI scans and then extracts imaging markers that indicate the level of connectivity between brain regions. A machine learning algorithm compares the marker patterns in the brains of children with autism to those of normally developed brains.
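
As a rough illustration of that three-stage pipeline, here is a minimal sketch (not the authors' code): stage 1, tissue isolation, is assumed already done; stage 2 turns per-region diffusion measures into region-pair connectivity markers; stage 3 feeds those markers to a generic classifier. The feature definition, the random-forest model and all data below are placeholders.

```python
# Hypothetical sketch of the three-stage idea described above (not the authors' code).
# Assumes DT-MRI scans are already preprocessed into per-region fractional-anisotropy
# (FA) values; a random forest stands in for whichever model the team actually used.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_region_pair_features(fa_by_region: np.ndarray) -> np.ndarray:
    """Stage 2: turn per-region FA values into one marker per region pair
    (here simply the mean FA of the two regions, as a placeholder)."""
    n_regions = fa_by_region.shape[1]
    pairs = [(i, j) for i in range(n_regions) for j in range(i + 1, n_regions)]
    return np.stack([(fa_by_region[:, i] + fa_by_region[:, j]) / 2 for i, j in pairs], axis=1)

# Stage 1 (tissue isolation) is assumed done; fa_by_region is (n_subjects, n_regions).
rng = np.random.default_rng(0)
fa_by_region = rng.uniform(0.2, 0.8, size=(226, 10))   # placeholder data
labels = rng.integers(0, 2, size=226)                   # 1 = autism, 0 = neurotypical

features = extract_region_pair_features(fa_by_region)

# Stage 3: train and evaluate a classifier on the region-pair markers.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, labels, cv=5, scoring="accuracy").mean())
```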

“Autism is primarily a disease of improper connections within the brain,” said co-author Gregory N. Barnes, M.D., Ph.D., professor of neurology and director of the Norton Children’s Autism Center in Louisville. “DT-MRI captures these abnormal connections that lead to the symptoms that children with autism often have, such as impaired social communication and repetitive behaviors.”

The researchers applied their methodology to the DT-MRI brain scans of 226 children between the ages of 24 and 48 months from the Autism Brain Imaging Data Exchange-II. The dataset included scans of 126 children affected by autism and 100 normally developing children. The technology demonstrated 97% sensitivity, 98% specificity, and an overall accuracy of 98.5% in identifying the children with autism.

“Our approach is a novel advancement that enables the early detection of autism in infants under two years of age,” Khudri said. “We believe that therapeutic intervention before the age of three can lead to better outcomes, including the potential for individuals with autism to achieve greater independence and higher IQs.”

According to the CDC’s 2023 Community Report on Autism, fewer than half of children with autism spectrum disorder received a developmental evaluation by three years of age, and 30% of children who met the criteria for autism spectrum disorder did not receive a formal diagnosis by 8 years of age.

“The idea behind early intervention is to take advantage of brain plasticity, or the ability of the brain to normalize function with therapy,” Dr. Barnes said.

The researchers said infants and young children with autism receive a delayed diagnosis for several reasons, including a lack of bandwidth at testing centers. Khudri said their AI system could facilitate precise autism management while reducing the time and costs associated with assessment and treatment.

“Imaging offers the promise of quickly detecting autism in an objective fashion,” Dr. Barnes said. “We envision an autism assessment that begins with DT-MRI followed by an abbreviated session with a psychologist to confirm the results and guide parents on next steps. This approach could reduce the psychologists’ workload by up to 30%.”

The AI system produces a report detailing which neural pathways are affected, the anticipated impact on brain functionality, and a severity grade that can be used to guide early therapeutic intervention.

The researchers are working toward commercializing and obtaining FDA clearance for their AI software.

Additional co-authors are Mostafa Abdelrahim, B.Sc., Yaser El-Nakieb, Ph.D., Mohamed Ali, Ph.D., Ahmed S. Shalaby, Ph.D., A. Gebreil, M.D., Ali Mahmoud, Ph.D., Ahmed Elnakib, Ph.D., Andrew Switala, Sohail Contractor, M.D., and Ayman S. El-Baz, Ph.D.

###

Note: Copies of RSNA 2023 news releases and electronic images will be available online at RSNA.org/press23.

RSNA is an association of radiologists, radiation oncologists, medical physicists and related scientists promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois. (RSNA.org)

Editor’s note: The data in these releases may differ from those in the published abstract and those actually presented at the meeting, as researchers continue to update their data right up until the meeting. To ensure you are using the most up-to-date information, please call the RSNA Newsroom at 1-312-791-6610.

For patient-friendly information on brain MRI, visit RadiologyInfo.org.

AI identifies non-smokers at high risk for lung cancer

Meeting Announcement

RADIOLOGICAL SOCIETY OF NORTH AMERICA

Image: Frontal chest X-ray shows a small nodular opacity (arrow) in the left upper lung zone. Axial, non-contrast, low-dose chest CT scan shows a 9-mm solid nodule (arrow) in the left upper lobe.

Credit: RSNA/Radiology

CHICAGO – Using a routine chest X-ray image, an artificial intelligence (AI) tool can identify non-smokers who are at high risk for lung cancer, according to a study being presented next week at the annual meeting of the Radiological Society of North America (RSNA).

Lung cancer is the most common cause of cancer death. The American Cancer Society estimates about 238,340 new cases of lung cancer in the United States this year and 127,070 lung cancer deaths. Approximately 10-20% of lung cancers occur in “never-smokers” – people who have never smoked cigarettes or smoked fewer than 100 cigarettes in their lifetime.

The United States Preventive Services Task Force (USPSTF) currently recommends lung cancer screening with low-dose CT for adults between the ages of 50 and 80 who have at least a 20 pack-year smoking history and currently smoke or have quit within the past 15 years. The USPSTF does not recommend screening for individuals who have never smoked or who have smoked very little. However, the incidence of lung cancer among never-smokers is rising, and because these cancers are not caught by screening, they tend to be more advanced when discovered than those found in smokers.
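
The screening criteria summarized above amount to a simple eligibility rule. Here is a minimal sketch, with the age, pack-year and quit-time thresholds taken from the guideline summary in this paragraph; the function name and parameters are ours, for illustration only.

```python
# Minimal sketch of the USPSTF eligibility rule summarized above.
# Pack-years = packs smoked per day x years smoked; all names here are illustrative.
from typing import Optional

def uspstf_eligible(age: int, pack_years: float, currently_smokes: bool,
                    years_since_quit: Optional[float]) -> bool:
    """True if a person meets the USPSTF criteria for low-dose CT screening."""
    if not (50 <= age <= 80):
        return False
    if pack_years < 20:
        return False
    return currently_smokes or (years_since_quit is not None and years_since_quit <= 15)

print(uspstf_eligible(age=62, pack_years=30, currently_smokes=False, years_since_quit=10))   # True
print(uspstf_eligible(age=62, pack_years=0,  currently_smokes=False, years_since_quit=None)) # False: never-smoker
```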

“Current Medicare and USPSTF guidelines recommend lung cancer screening CT only for individuals with a substantial smoking history,” said the study’s lead author, Anika S. Walia, B.A., a medical student at Boston University School of Medicine and researcher at the Cardiovascular Imaging Research Center (CIRC) at Massachusetts General Hospital (MGH) and Harvard Medical School in Boston. “However, lung cancer is increasingly common in never-smokers and often presents at an advanced stage.”

One reason federal guidelines exclude never-smokers from screening recommendations is because it is difficult to predict lung cancer risk in this population. Existing lung cancer risk scores require information that is not readily available for most individuals, such as family history of lung cancer, pulmonary function testing or serum biomarkers.

For the study, CIRC researchers set out to improve lung cancer risk prediction in never-smokers by testing whether a deep learning model could identify never-smokers at high risk for lung cancer, based on their chest X-rays from the electronic medical record. Deep learning is an advanced type of AI that can be trained to search X-ray images to find patterns associated with disease.

“A major advantage to our approach is that it only requires a single chest X-ray image, which is one of the most common tests in medicine and widely available in the electronic medical record,” Walia said. 

The “CXR-Lung-Risk” model was developed using 147,497 chest X-rays of 40,643 asymptomatic smokers and never-smokers from the Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial to predict lung-related mortality risk, based on a single chest X-ray image as input.

The researchers externally validated the model in a separate group of never-smokers who had routine outpatient chest X-rays from 2013 to 2014. The primary outcome was six-year incident lung cancer, identified using International Classification of Diseases codes. Risk scores were then converted to low-, moderate- and high-risk groups based on externally derived risk thresholds.
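
A minimal sketch of that final grouping step, assuming one continuous CXR-Lung-Risk score per patient; the cut-off values and column names below are placeholders, not the externally derived thresholds used in the study.

```python
# Hypothetical sketch of the risk-grouping step described above. The threshold values
# below are placeholders, not the externally derived cut-offs used in the study.
import pandas as pd

LOW_TO_MODERATE = 0.02    # placeholder cut-offs on the continuous risk score
MODERATE_TO_HIGH = 0.05

def risk_group(score: float) -> str:
    if score < LOW_TO_MODERATE:
        return "low"
    if score < MODERATE_TO_HIGH:
        return "moderate"
    return "high"

# One row per never-smoker: model score and 6-year incident lung cancer (0/1).
df = pd.DataFrame({"score": [0.01, 0.03, 0.08, 0.06], "lung_cancer_6yr": [0, 0, 1, 0]})
df["group"] = df["score"].apply(risk_group)
print(df.groupby("group")["lung_cancer_6yr"].mean())   # observed 6-year rate per group
```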

Of 17,407 patients (mean age 63 years) included in the study, 28% were deemed high risk by the deep learning model, and 2.9% of these patients later had a diagnosis of lung cancer. This rate in the high-risk group well exceeded the 1.3% six-year risk threshold at which National Comprehensive Cancer Network guidelines recommend lung cancer screening CT.

After adjusting for age, sex, race, previous lower respiratory tract infection and prevalent chronic obstructive pulmonary disease, there was still a 2.1 times greater risk of developing lung cancer in the high-risk group compared to the low-risk group.
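
The adjusted comparison described above is the kind of estimate a covariate-adjusted regression provides. The sketch below uses a regularized logistic regression on simulated data purely for illustration; the study's actual modelling may differ (for example, a survival model), and every value here is made up.

```python
# Illustrative sketch of an adjusted comparison like the one described above: a
# regularized logistic regression of 6-year incident lung cancer on the high-risk
# indicator plus the adjustment covariates named in the text. Data are simulated;
# nothing here reproduces the study's numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
X = np.column_stack([
    rng.integers(0, 2, n),     # high_risk flag from the deep learning model
    rng.normal(63, 8, n),      # age
    rng.integers(0, 2, n),     # male
    rng.integers(0, 2, n),     # prevalent COPD
    rng.integers(0, 2, n),     # prior lower respiratory tract infection
])
# Simulated outcome in which the high-risk flag roughly doubles the odds of cancer.
logit = -4.5 + 0.75 * X[:, 0] + 0.02 * (X[:, 1] - 63)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("adjusted odds ratio for high-risk group:", np.exp(model.coef_[0][0]))
```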

“This AI tool opens the door for opportunistic screening for never-smokers at high risk of lung cancer, using existing chest X-rays in the electronic medical record,” said senior author Michael T. Lu, M.D., M.P.H., director of artificial intelligence and co-director of CIRC at MGH. “Since cigarette smoking rates are declining, approaches to detect lung cancer early in those who do not smoke are going to be increasingly important.”   

Additional co-authors are Saman Doroodgar Jorshery, M.D., Ph.D., and Vineet K. Raghu, Ph.D.

The researchers report support from the Boston University School of Medicine Student Committee on Medical School Affairs, National Academy of Medicine/Johnson & Johnson Innovation Quickfire Challenge, and the Risk Management Corporation of the Harvard Medical Institutions Incorporated.

###

Note: Copies of RSNA 2023 news releases and electronic images will be available online at RSNA.org/press23.

RSNA is an association of radiologists, radiation oncologists, medical physicists and related scientists promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois. (RSNA.org)

Editor’s note: The data in these releases may differ from those in the published abstract and those actually presented at the meeting, as researchers continue to update their data right up until the meeting. To ensure you are using the most up-to-date information, please call the RSNA Newsroom at 1-312-791-6610.

For patient-friendly information on lung cancer screening, visit RadiologyInfo.org.

 

New method for determining the water content of water-soluble compounds


Peer-Reviewed Publication

UNIVERSITY OF EASTERN FINLAND




Researchers at the University of Eastern Finland School of Pharmacy have developed a new method for the accurate determination of the water content of water-soluble compounds. Water content plays a significant role in, for example, drug dosage. The method utilises solution-state nuclear magnetic resonance (NMR) spectroscopy.

In pharmaceutical research and development, it is very important to know the exact structure and water content of the compound being studied, as they affect both the physicochemical and pharmaceutical properties of the compound. Additionally, the water content affects the total molecular weight of the compound that is needed for the calculation of the correct drug dosage. There are several methods for determining the water content of chemical compounds, of which titration and thermogravimetry (TGA) are the most common ones. However, most methods require accurate weighing, destroy the sample, require special expertise or are time-consuming.

The NMR method developed in the study is simple, accurate and works well for determining the water content of water-soluble compounds: the NMR results were comparable with the water contents obtained by TGA and X-ray crystallography determinations.

“The research also revealed that the previously determined water content may change during storage. For example, the commercial sodium salt of citric acid had changed from a form containing 5.5 crystal water molecules to one containing 2 crystal water molecules,” Senior Researcher Tuulia Tykkynen and Senior Researcher Petri Turhanen of the University of Eastern Finland point out.
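
To illustrate why such a change in hydrate number matters for dosing, here is a back-of-the-envelope calculation of ours using standard molar masses (anhydrous trisodium citrate, Na3C6H5O7, ≈ 258.1 g/mol; water ≈ 18.02 g/mol); the study itself reports only the change from 5.5 to 2 crystal waters.

```latex
% Our illustration, using standard molar masses (not values from the study):
M_{5.5\,\mathrm{H_2O}} \approx 258.1 + 5.5 \times 18.02 \approx 357.2\ \mathrm{g/mol},
\qquad
M_{2\,\mathrm{H_2O}} \approx 258.1 + 2 \times 18.02 \approx 294.1\ \mathrm{g/mol}
```

A dose weighed out assuming 5.5 waters of crystallization would therefore deliver about 357.2/294.1 ≈ 1.21 times, i.e. roughly 20% more, of the active compound if the sample had in fact converted to the dihydrate.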

The advantages of the NMR method are easy sample handling (no accurate weighing required), speed (the measurement of one sample and the calculation of the result takes ca. 15–20 minutes) and the possibility of recovering the investigated compound after the measurement, as the method does not destroy the sample. The method is also sufficiently precise and repeatable. An NMR spectrometer is a very expensive investment, but such equipment is almost always found in laboratories where new compounds and pharmaceuticals are synthesized, as it is an essential tool for structure determination.

The study was published in the prestigious Analytical Chemistry journal published by the American Chemical Society. The study was conducted in collaboration with the Department of Technical Physics of the University of Eastern Finland (TGA measurements) and the Department of Chemistry of the University of Crete (crystal structures).

 

Our brains are not able to ‘rewire’ themselves, despite what most scientists believe, new study argues


Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE



Contrary to the commonly-held view, the brain does not have the ability to rewire itself to compensate for the loss of sight, an amputation or stroke, for example, say scientists from the University of Cambridge and Johns Hopkins University.

Writing in eLife, Professors Tamar Makin (Cambridge) and John Krakauer (Johns Hopkins) argue that the notion that the brain, in response to injury or deficit, can reorganise itself and repurpose particular regions for new functions, is fundamentally flawed – despite being commonly cited in scientific textbooks. Instead, they argue that what is occurring is merely the brain being trained to utilise already existing, but latent, abilities.

One of the most common examples given is where a person loses their sight – or is born blind – and the visual cortex, previously specialised in processing vision, is rewired to process sounds, allowing the individual to use a form of ‘echolocation’ to navigate a cluttered room. Another common example is of people who have had a stroke and are initially unable to move their limbs repurposing other areas of the brain to allow them to regain control.

Krakauer, Director of the Center for the Study of Motor Learning and Brain Repair at Johns Hopkins University, said: “The idea that our brain has an amazing ability to rewire and reorganise itself is an appealing one. It gives us hope and fascination, especially when we hear extraordinary stories of blind individuals developing almost superhuman echolocation abilities, for example, or stroke survivors miraculously regaining motor abilities they thought they’d lost.

“This idea goes beyond simple adaptation, or plasticity – it implies a wholesale repurposing of brain regions. But while these stories may well be true, the explanation of what is happening is, in fact, wrong.”

In their article, Makin and Krakauer look at ten seminal studies that purport to show the brain’s ability to reorganise. They argue, however, that while the studies do indeed show the brain’s ability to adapt to change, the brain is not creating new functions in previously unrelated areas – instead it is utilising latent capacities that have been present since birth.

For example, one of the studies – research carried out in the 1980s by Professor Michael Merzenich at University of California, San Francisco – looked at what happens when a hand loses a finger. The hand has a particular representation in the brain, with each finger appearing to map onto a specific brain region. Remove the forefinger, and the area of the brain previously allocated to this finger is reallocated to processing signals from neighbouring fingers, argued Merzenich – in other words, the brain has rewired itself in response to changes in sensory input.

Not so, says Makin, whose own research provides an alternative explanation.

In a study published in 2022, Makin used a nerve blocker to temporarily mimic the effect of amputation of the forefinger in her subjects. She showed that even before amputation, signals from neighbouring fingers mapped onto the brain region ‘responsible’ for the forefinger – in other words, while this brain region may have been primarily responsible for processing signals from the forefinger, it was not exclusively so. All that happens following amputation is that existing signals from the other fingers are ‘dialled up’ in this brain region.

Makin, from the Medical Research Council (MRC) Cognition and Brain Sciences Unit at the University of Cambridge, said: “The brain's ability to adapt to injury isn’t about commandeering new brain regions for entirely different purposes. These regions don’t start processing entirely new types of information. Information about the other fingers was available in the examined brain area even before the amputation, it’s just that in the original studies, the researchers didn’t pay much notice to it because it was weaker than for the finger about to be amputated.”

Another compelling counterexample to the reorganisation argument is seen in a study of congenitally deaf cats, whose auditory cortex – the area of the brain that processes sound – appears to be repurposed to process vision. But when they are fitted with a cochlear implant, this brain region immediately begins processing sound once again, suggesting that the brain had not, in fact, rewired.

Examining other studies, Makin and Krakauer found no compelling evidence that the visual cortex of individuals that were born blind or the uninjured cortex of stroke survivors ever developed a novel functional ability that did not otherwise exist. 

Makin and Krakauer do not dismiss the stories of blind people being able to navigate purely based on hearing, or of individuals who have experienced a stroke regaining their motor functions, for example. They argue instead that rather than completely repurposing regions for new tasks, the brain is enhancing or modifying its pre-existing architecture – and it is doing this through repetition and learning.

Understanding the true nature and limits of brain plasticity is crucial, both for setting realistic expectations for patients and for guiding clinical practitioners in their rehabilitative approaches, they argue.

Makin added: “This learning process is a testament to the brain's remarkable – but constrained – capacity for plasticity. There are no shortcuts or fast tracks in this journey. The idea of quickly unlocking hidden brain potentials or tapping into vast unused reserves is more wishful thinking than reality. It's a slow, incremental journey, demanding persistent effort and practice. Recognising this helps us appreciate the hard work behind every story of recovery and adapt our strategies accordingly.

“So many times, the brain’s ability to rewire has been described as ‘miraculous’ – but we’re scientists, we don’t believe in magic. These amazing behaviours that we see are rooted in hard work, repetition and training, not the magical reassignment of the brain’s resources.”

Reference
Makin, TR & Krakauer, JW. Against Cortical Reorganisation. eLife; 21 Nov 2023; DOI: 10.7554/eLife.84716

 

Proof of concept of new material for long lasting relief from dry mouth conditions

Peer-Reviewed Publication

UNIVERSITY OF LEEDS

Image: Synthetic saliva using a dairy protein. Lactoferrin, a protein found in milk (coloured dark blue), forms the mesh-like architecture of the hydrated microgel, partially coated by a hydrogel made from the polysaccharide κ-carrageenan (coloured light blue).

Credit: Dr Anna Tanczos, www.SciCommStudios.co.uk


A novel aqueous lubricant technology designed to help people who suffer from a dry mouth is between four and five times more effective than existing commercially available products, according to laboratory tests. 

Developed by scientists at the University of Leeds, the saliva substitute is described as comparable to natural saliva in the way it hydrates the mouth and acts as a lubricant when food is chewed.  

Under a powerful microscope, the molecules in the substance - known as a microgel - appear as a lattice-like network or sponge which binds onto the surface of the mouth. Surrounding the microgel is a polysaccharide-based hydrogel which traps water. This dual function keeps the mouth feeling hydrated for longer. 

Professor Anwesha Sarkar, who has led the development of the saliva substitute, said: “Our laboratory benchmarking reveals that this substance will have a longer-lasting effect.  

“The problem with many of the existing commercial products is they are only effective for short periods because they do not bind to the surface of the mouth, with people having to frequently reapply the substance, sometimes while they are talking or as they eat. 

“That affects people’s quality of life.” 

Results from the laboratory evaluation - “Benchmarking of a microgel-reinforced hydrogel-based aqueous lubricant against commercial saliva substitutes” - are reported today (Monday, November 20) in the journal Scientific Reports. 

The superior performance of the newly developed substance compared with existing products is due to a process called adsorption. Adsorption is the ability of a molecule to bind to something, in this case the surface of the inside of the mouth. 

Benchmark results 

The novel microgel comes in two forms: one made with a dairy protein and the other a vegan version using a potato protein.  

The new substance was benchmarked against eight commercially available saliva substitutes, including Boots’ own-brand product, as well as Biotene, Oralieve, Saliveze and Glandosane. All the benchmarking was done in a laboratory on an artificial tongue-like surface and did not involve human subjects. 

The testing revealed the Leeds product had a lower level of desorption - the opposite of adsorption - which is how much lubricant was lost from the surface of the synthetic tongue.  

With the commercially available products, between 23% and 58% of the lubricant was lost. With the saliva substitute developed at Leeds, the figure was just 7%. The dairy version slightly outperformed the vegan version. 

Dr Olivia Pabois, a Research Fellow at Leeds and first author of the paper, said: “The test results provide a robust proof of concept that our material is likely to be more effective under real-world conditions and could offer relief up to five times longer than the existing products. 

“The results of the benchmarking show favourable results in three key areas. Our microgel provides high moisturisation, binds strongly with the surfaces of the mouth and is an effective lubricant, making it more comfortable for people to eat and talk.” 

The substances used in the production of the saliva substitute - dairy and plant proteins and carbohydrates - are non-toxic to humans and non-caloric. 

Although testing of the new product has involved just laboratory analysis, the scientific team believe the results will be replicated in human trials. 

The authors of the study are looking to translate the lubricant technology into commercially available products, to improve the quality of life of people who experience debilitating dry mouth conditions.  

Xerostomia – healthcare burden 

A dry mouth or xerostomia, to give it its medical name, is a common condition which affects around one in ten of the population, and is prevalent among older people and people who have had cancer treatment or need to take a mix of medicines. 

In severe cases, a dry mouth causes discomfort when swallowing and can lead to malnutrition and dental problems, all of which increase the burden on healthcare systems. 

The paper - “Benchmarking of a microgel-reinforced hydrogel-based aqueous lubricant against commercial saliva substitutes” - can be downloaded from the Scientific Reports website when the embargo lifts - https://doi.org/10.1038/s41598-023-46108-w. The authors are Olivia Pabois, Alejandro Avila-Sierra, Marco Ramaioli, Mingduo Mu, Yasmin Message, Kwan-Mo You, Evangelos Liamas, Ben Kew, Kalpana Durga, Lisa Doherty and Anwesha Sarkar.  

END 

Image: A potato protein (coloured dark green) forms the mesh-like architecture of the hydrated microgel, partially coated by a hydrogel made from the polysaccharide xanthan gum (shown in light green).

Credit: Dr Anna Tanczos, www.SciCommStudios.co.uk

 

Dwarf galaxies use 10-million-year quiet period to churn out stars


Peer-Reviewed Publication

UNIVERSITY OF MICHIGAN







ANN ARBOR—If you look at massive galaxies teeming with stars, you might be forgiven for thinking they are star factories, churning out brilliant balls of gas. But actually, less evolved dwarf galaxies have bigger regions of star factories, with higher rates of star formation.

Now, University of Michigan researchers have discovered the reason underlying this: These galaxies enjoy a 10-million-year delay in blowing out the gas cluttering up their environments. Star-forming regions are able to hang on to their gas and dust, allowing more stars to coalesce and evolve.

In these relatively pristine dwarf galaxies, massive stars—stars about 20 to 200 times the mass of our sun—collapse into black holes instead of exploding as supernovae. But in more evolved, polluted galaxies, like our Milky Way, they are more likely to explode, thereby generating a collective superwind. Gas and dust get blasted out of the galaxy, and star formation quickly stops. 

Their findings are published in the Astrophysical Journal.

"As stars go supernova, they pollute their environment by producing and releasing metals," said Michelle Jecmen, study first author and an undergraduate researcher. "We argue that at low metallicity—galaxy environments that are relatively unpolluted—there is a 10-million-year delay in the start of strong superwinds, which, in turn, results in higher star formation.” 

The U-M researchers point to what's called the Hubble tuning fork, a diagram that depicts the way astronomer Edwin Hubble classified galaxies. In the handle of the tuning fork are the largest galaxies. Huge, round and brimming with stars, these galaxies have already turned all of their gas into stars. Along the tines of the tuning fork are spiral galaxies that do have gas and star-forming regions along their compact arms. At the end of the tuning fork's tines are the least evolved, smallest galaxies.

"But these dwarf galaxies have just these really mondo star-forming regions," said U-M astronomer Sally Oey, senior author of the study. "There have been some ideas around why that is, but Michelle's finding offers a very nice explanation: These galaxies have trouble stopping their star formation because they don't blow away their gas."

Additionally, this 10-million-year period of quiet offers astronomers the opportunity to peer at scenarios similar to the cosmic dawn, a period of time just after the Big Bang, Jecmen said. In pristine dwarf galaxies, gas clumps together and forms gaps through which radiation can escape. This previously known phenomenon is called the "picket fence" model, with UV radiation escaping between slats in the fence. The delay explains why gas would have had time to clump together.

Ultraviolet radiation is important because it ionizes hydrogen—a process that also occurred right after the Big Bang, causing the universe to go from opaque to transparent. 

"And so looking at low-metallicity dwarf galaxies with lots of UV radiation is somewhat similar to looking all the way back to the cosmic dawn," Jecmen said. "Understanding the time near the Big Bang is so interesting. It's foundational to our knowledge. It's something that happened so long ago—it's so fascinating that we can see sort of similar situations in galaxies that exist today."

A second study, published in the Astrophysical Journal Letters and led by Oey, used the Hubble Space Telescope to look at Mrk 71, a region in a nearby dwarf galaxy about 10 million light years away. In Mrk 71, the team found observational evidence of Jecmen's scenario. Using a new technique with the Hubble Space Telescope, the team employed a filter set that looks at the light of triply ionized carbon. 

In more evolved galaxies with lots of supernova explosions, those explosions heat gas in a star cluster to very high temperatures—to millions of degrees Kelvin, Oey said. As this hot superwind expands, it blasts the rest of the gas out of the star clusters. But in low metallicity environments such as Mrk 71, where stars aren't blowing up, energy within the region is radiated away. It doesn't have the chance to form a superwind.

The team's filters picked up a diffuse glow of the ionized carbon throughout Mrk 71, demonstrating that the energy is radiating away. Therefore, there is no hot superwind, instead allowing dense gas to remain throughout the environment. 

Oey and Jecmen say there are many implications for their work.

"Our findings may also be important in explaining the properties of galaxies that are being seen at cosmic dawn by the James Webb Space Telescope right now," Oey said. "I think we're still in the process of understanding the consequences."

Studies:

Delayed massive-star mechanical feedback at low metallicity

Nebular C IV 1550 imaging of the metal-poor starburst Mrk 71: Direct evidence of catastrophic cooling

 

 

AI recognizes the tempo and stages of embryonic development


How can we reliably and objectively characterize the speed and various stages of embryonic development? With the help of artificial intelligence! Researchers at the University of Konstanz present an automated method.


Peer-Reviewed Publication

UNIVERSITY OF KONSTANZ



Animal embryos go through a series of characteristic developmental stages on their journey from a fertilized egg cell to a functional organism. This biological process is largely genetically controlled and follows a similar pattern across different animal species. Yet, there are differences in the details – between individual species and even among embryos of the same species. For example, the tempo at which individual embryonic stages are passed through can vary. Such variations in embryonic development are considered an important driver of evolution, as they can lead to new characteristics, thus promoting evolutionary adaptations and biodiversity.

Studying the embryonic development of animals is therefore of great importance to better understand evolutionary mechanisms. But how can differences in embryonic development, such as the timing of developmental stages, be recorded objectively and efficiently? Researchers at the University of Konstanz led by systems biologist Patrick Müller are developing and using methods based on artificial intelligence (AI). In their current article in Nature Methods, they describe a novel approach that automatically captures the tempo of development processes and recognizes characteristic stages without human input – standardized and across species boundaries.

Every embryo is a little different
Our current knowledge of animal embryogenesis and individual developmental stages is based on studies in which embryos of different ages were observed under the microscope and described in detail. Thanks to this painstaking manual work, reference books with idealized depictions of individual embryonic stages are available for many animal species today. "However, embryos often do not look exactly the same under the microscope as they do in the schematic drawings. And the transitions between individual stages are not abrupt, but more gradual," explains Müller. Manually assigning an embryo to the various stages of development is therefore not trivial even for experts and a bit subjective.

What makes it even more difficult: Embryonic development does not always follow the expected timetable. "Various factors can influence the timing of embryonic development, such as temperature," explains Müller. The AI-supported method he and his colleagues developed is a substantial step forward. For a first application example, the researchers trained their Twin Network with more than 3 million images of zebrafish embryos that were developing healthily. They then used the resulting AI model to automatically determine the developmental age of other zebrafish embryos.
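
As a rough sketch of the twin-network idea described above (not the published model): a shared convolutional encoder embeds embryo images, and a query embryo's developmental age is taken from the nearest embedding in a reference series of known ages. The architecture, image sizes and reference ages below are placeholders, and the encoder here is untrained, whereas the actual model was trained on millions of labelled embryo images.

```python
# Minimal sketch of the twin-network idea described above (not the published model).
# A shared CNN embeds embryo images; a query embryo's developmental age is taken from
# the nearest embedding in a reference series of known ages. All sizes are placeholders.
import torch
import torch.nn as nn

class TwinEncoder(nn.Module):
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embedding_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Untrained encoder for illustration; in the study, the network was trained on
# millions of labelled embryo images before being used this way.
encoder = TwinEncoder()

reference_images = torch.randn(20, 1, 128, 128)   # 20 staged reference embryos (placeholder)
reference_ages_hpf = torch.linspace(4, 48, 20)     # hours post-fertilization (placeholder)
query_image = torch.randn(1, 1, 128, 128)          # embryo of unknown age

with torch.no_grad():
    ref_emb = encoder(reference_images)            # (20, 64)
    query_emb = encoder(query_image)               # (1, 64)
    distances = torch.cdist(query_emb, ref_emb)    # Euclidean distances to each reference
    estimated_age = reference_ages_hpf[distances.argmin()].item()

print(f"estimated developmental age: {estimated_age:.1f} hpf")
```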

Objective, accurate and generalizable
The researchers were able to demonstrate that the AI is capable of identifying key steps in zebrafish embryogenesis and detecting individual stages of development fully automatically and without human input. In their study, the researchers used the AI system to compare the developmental stage of embryos and describe the temperature dependence of embryonic development in zebrafish. Although the AI was trained with images of normally developing embryos, it was also able to identify malformations that can occur spontaneously in a certain percentage of embryos or that may be triggered by environmental toxins.

In a final step, the researchers transferred the method to other animal species, such as sticklebacks or the worm Caenorhabditis elegans, which is evolutionarily quite distant from zebrafish. "Once the necessary image material is available, our Twin Network-based method can be used to analyze the embryonic development of various animal species in terms of time and stages. Even if no comparative data for the animal species exists, our system works in an objective, standardized way", Müller explains. The method therefore holds great potential for studying the development and evolution of previously uncharacterized animal species.

 

Key facts:

 

  • EMBARGOED UNTIL THURSDAY, 23 NOVEMBER 2023, 17:00 CET (16:00 LONDON TIME, 11:00 U.S. EASTERN TIME)
  • Original publication: N. Toulany, H. Morales-Navarrete, D. Čapek, J. Grathwohl, M. Ünalan & P. Müller (2023) Uncovering developmental time and tempo using deep learning. Nature Methods; doi: 10.1038/s41592-023-02083-8
  • Konstanz researchers develop AI model that objectively records characteristic stages and tempo of embryonic development in animals without human input
  • Open science: The authors have made the Twin-Network open source code and their research data available for free on GitHub and KonDATA.
  • Funding: European Research Council (ERC), German Research Foundation (DFG), Max Planck Society (MPG), European Molecular Biology Organization (EMBO), Interdisciplinary Graduate School of Medicine (IZKF) University of Tübingen, Blue Sky funding programme of the University of Konstanz

 

Note to editors:
You can download an image here:

 

Link: https://www.uni-konstanz.de/fileadmin/pi/fileserver/2023/embryonalentwicklung.jpg

Caption: Zebrafish embryos go through characteristic developmental stages, but even sibling embryos differ in the speed of these stages. Artificial intelligence can be used to calculate differences between embryos in terms of development tempo, characteristic developmental stages and structural differences.

Image: © Patrick Müller, Nikan Toulany

 

Researchers obtain promising results against capacity loss in vanadium batteries


A computational study conducted in Brazil could help extend the working lives of these batteries, which are widely used by utilities and manufacturers.


Peer-Reviewed Publication

FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO

Image: The study involved computer simulations designed to find out how ion leakage between the anolyte and catholyte, called transport loss, leads to battery deactivation.

Credit: CDMF




An article by researchers at the Center for Development of Functional Materials (CDMF) in Brazil describes a successful strategy to mitigate charge capacity loss in vanadium redox flow batteries, which are used by electric power utilities among other industries and can accumulate large amounts of energy. The article is published in the Chemical Engineering Journal.

CDMF is a Research, Innovation and Dissemination Center (RIDC) funded by FAPESP and hosted by the Federal University of São Carlos (UFSCar) in São Paulo state.

The study involved computer simulations designed to find out how ion leakage between the anolyte and catholyte, called transport loss, leads to battery deactivation, and how to mitigate this loss so as to keep ion concentrations constant over time. In the first stage, the researchers estimated the effects of current density, active species concentration and volumetric flow on capacity loss. The second stage sought optimal operating conditions that minimize capacity loss by imposing a flow between the electrolyte tanks in the direction opposite to cross-contamination (the transport of electroactive species through the membrane). 
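
To illustrate the idea of a compensating inter-tank flow, here is a toy two-tank mass-balance sketch of ours, not the CDMF model: vanadium crosses the membrane from one half-cell to the other (cross-contamination), while a small imposed flow between the tanks pushes concentration back the other way. All parameter values and the form of the equations are placeholders.

```python
# Toy two-tank mass balance illustrating the idea described above (not the CDMF model).
# Vanadium crosses the membrane from one half-cell to the other ("cross-contamination");
# a small compensating flow Q between the tanks counteracts it. Parameters are placeholders.
from scipy.integrate import solve_ivp

V_TANK = 1.0     # tank volume, L
K_CROSS = 0.02   # lumped crossover rate through the membrane, L/h
Q_COMP = 0.02    # compensating inter-tank flow, L/h (set to 0 to see uncompensated drift)

def balance(t, c):
    c_neg, c_pos = c                    # vanadium concentration in each tank, mol/L
    crossover = K_CROSS * c_neg         # net species flux from negative to positive side
    compensation = Q_COMP * c_pos       # returned by the imposed counter-flow
    dc_neg = (-crossover + compensation) / V_TANK
    dc_pos = ( crossover - compensation) / V_TANK
    return [dc_neg, dc_pos]

sol = solve_ivp(balance, t_span=(0, 500), y0=[1.0, 1.0])
c_neg_end, c_pos_end = sol.y[:, -1]
print(f"after 500 h: negative tank {c_neg_end:.3f} M, positive tank {c_pos_end:.3f} M")
```

With the compensating flow matched to the crossover rate, the concentrations stay constant over time, which is the stated goal; setting Q_COMP to zero shows the uncompensated drift that leads to capacity loss.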

The results showed current density and active species concentration to be the main variables affecting capacity loss. According to the researchers, their approach successfully mitigated cross-contamination in different combinations of the two variables, providing an optimal flow between electrolyte tanks under different operating conditions.

Ernesto Pereira, last author of the article and a professor at UFSCar, noted that the main advantage of redox flow batteries is lack of electrode aging as the electroactive components are dissolved in solutions instead of being coated onto electrodes.

Commercial vanadium redox flow batteries are expected to have a longer lifetime than other types, although the study was conducted on a small scale. “Energy efficiency loss due to aging is minimal, given the slow pace of aging,” he said. 

The researchers explained that they are exploring and analyzing flow batteries computationally, with commercial batteries as a model, as part of a broader strategy that includes the development of novel organic substances for this type of battery.

About São Paulo Research Foundation (FAPESP)

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at www.fapesp.br/en and visit FAPESP news agency at www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.