Tuesday, December 05, 2023

AI NEWS

AI networks are more vulnerable to malicious attacks than previously thought


Peer-Reviewed Publication

NORTH CAROLINA STATE UNIVERSITY






Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.

“For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”

The new study from Wu and his collaborators focused on determining how common these sorts of adversarial vulnerabilities are in AI deep neural networks. They found that the vulnerabilities are much more common than previously thought.

“What’s more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want,” Wu says. “Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers – or whatever the vulnerability is.

“This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don’t want to put the system into practical use – particularly for applications that can affect human lives.”

To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK. The software can be used to test any deep neural network for adversarial vulnerabilities.

“Basically, if you have a trained AI system, and you test it with clean data, the AI system will behave as predicted,” Wu says. “QuadAttacK watches these operations and learns how the AI is making decisions related to the data. This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see.”
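
For readers who want a concrete sense of what such an attack looks like in code, the sketch below runs a standard targeted projected-gradient-descent (PGD) attack against a pretrained ResNet-50 in PyTorch. It only illustrates the general idea of adversarial manipulation; it is not QuadAttacK, whose quadratic-programming formulation for ordered top-K attacks is described in the paper, and the model, perturbation budget and target label here are arbitrary choices made for the example.

```python
# Illustrative sketch only: a basic targeted PGD attack on a pretrained
# ResNet-50, showing how small input perturbations can force a chosen
# prediction. This is NOT the QuadAttacK method from the paper; it is a
# generic textbook attack included for intuition.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

def targeted_pgd(image, target_class, eps=8 / 255, alpha=2 / 255, steps=20):
    """Nudge `image` within an L-infinity ball of radius eps so the model
    predicts `target_class` (e.g. 'mailbox' instead of 'stop sign')."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target_class)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step against the loss gradient to make the target class more likely,
        # then project back into the allowed perturbation budget.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

# Usage: x stands in for an image batch of shape (1, 3, 224, 224) in [0, 1].
x = torch.rand(1, 3, 224, 224)
target = torch.tensor([637])          # hypothetical target label index
x_attacked = targeted_pgd(x, target)
print(model(x_attacked).argmax(dim=1))
```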

In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DEiT-S). These four networks were chosen because they are in widespread use in AI systems around the world.

“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”

The research team has made QuadAttacK publicly available, so that the research community can use it themselves to test neural networks for vulnerabilities. The program can be found here: https://thomaspaniagua.github.io/quadattack_web/.

“Now that we can better identify these vulnerabilities, the next step is to find ways to minimize those vulnerabilities,” Wu says. “We already have some potential solutions – but the results of that work are still forthcoming.”

The paper, “QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks,” will be presented Dec. 16 at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), which is being held in New Orleans, La. First author of the paper is Thomas Paniagua, a Ph.D. student at NC State. The paper was co-authored by Ryan Grainger, a Ph.D. student at NC State.

The work was done with support from the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and from the National Science Foundation, under grants 1909644, 2024688 and 2013451.

AI helps us better understand (and protect) forests


Peer-Reviewed Publication

UNIVERSITÀ DI BOLOGNA




The coordinated work of over 150 scientists, complemented by the substantial computational capabilities of AI, seeks to enhance our understanding of forests. The primary goal is to investigate the evolving nature of these ecosystems and develop effective measures for their conservation. The results of this challenging research – promoted by the researchers of the Global Forest Biodiversity Initiative – have been published in three scientific articles. Two of these articles have been published in Nature, while the third appears in Nature Plants.

“Through the collaborative efforts of hundreds of researchers, science has achieved a significant advancement in the understanding of the world’s forest ecology and in emphasising the need to protect forests. Without the supercomputer’s computing power and AI, we would have needed decades and a workforce of thousands, all without the certainty of obtaining reliable estimates,” says Roberto Cazzolla Gatti, professor at the Department of Biological, Geological, and Environmental Sciences of the University of Bologna and co-author of the three studies.

ALIEN SPECIES AND NATIVE DIVERSITY
The first topic the researchers focused on was the invasion of non-native trees: an essential phenomenon to understand in order to safeguard native ecosystems and limit the spread of invasive species. Which factors trigger and facilitate this process?

By analysing tree databases at a global level, researchers determined that temperature and precipitation serve as strong predictors of the invasion strategy: non-native species effectively invade an area when their environmental preferences align with those of the native community in conditions of extreme cold or drought.

However, there is another element which facilitates the diffusion of invasive species even more: human activity, particularly in environments such as managed forests or near roads and seaports.

“The invasion of a place by non-native trees is not only predicted by anthropic factors, but its severity is also governed by native diversity: greater diversity reduces the severity of the invasion. Rather than combating alien species when it's already too late, our focus should be on safeguarding the health of forests. This proactive approach, as in other ecosystems, would make it harder for alien species to spread and invade”, says Professor Cazzolla Gatti.

CARBON SINKS?
Nowadays we recognise the crucial role of protecting forests in preserving their capacity to capture carbon dioxide and serve as the Earth’s carbon sink. However, what is the global potential of forests to store carbon?

The second study, published in Nature, focuses on that topic and reveals that the current global forest carbon stock is considerably below its natural potential. Almost 61% of this potential lies in areas hosting existing forests, where protecting the ecosystem would allow them to recover to maturity. The remaining 39% lies outside urban and agricultural lands, in regions where forests have been removed or fragmented.

“Forests alone cannot substitute for the necessary reduction of CO2 emissions in the atmosphere. However, our results support the idea that the conservation, recovery, and sustainable management of different forests can offer a precious contribution to reaching the global goals for climate protection and biodiversity.

For the first time, we were able to verify that, despite regional variations, predictions on a global scale exhibit remarkable coherence, with only a 12% difference between the estimates obtained from the ground and those derived from satellites. Forests therefore serve as a major carbon sink for the Earth; however, anthropogenic changes in climate and land use reduce their absorption capacity”, explains Professor Cazzolla Gatti.

LEAF TYPES AND CLIMATE CHANGE
In the course of their analyses, the researchers went even further. They sought to understand in detail the factors that influence the global variation in tree leaves and the role of tree species in terrestrial ecosystems, including the cycles of carbon, water and nutrients.

As a result, the researchers discovered that the global variation between evergreen and deciduous trees is mainly driven by isothermality and soil characteristics, while leaf type is determined by temperature. In particular, their estimates reveal that 38% of the world’s trees are evergreen with needle-shaped leaves, 29% are broadleaved evergreen, 27% are broadleaved deciduous, and 5% are deciduous with needle-shaped leaves.

Professor Cazzolla Gatti explains, “Depending on future greenhouse gas emissions, by the end of the century 17 to 34 percent of forest areas may undergo climate conditions that currently support a different type of forest: up to a third of the earth’s green areas will likely experience intense climate stress. The results of this study can improve predictions of the functioning of forest ecosystems and the carbon cycle by quantifying the distribution of tree leaf types and the corresponding biomass, identifying the areas in which climate change will exert the greatest pressure on the current leaf types.”

Using AI to find microplastics


Researchers use AI to identify toxic substances in wastewater with greater accuracy and speed


Peer-Reviewed Publication

UNIVERSITY OF WATERLOO




An interdisciplinary research team from the University of Waterloo is using artificial intelligence (AI) to identify microplastics faster and more accurately than ever before.

Microplastics are commonly found in food and are dangerous pollutants that cause severe environmental damage – finding them is the key to getting rid of them.

The research team’s advanced imaging identification system could help wastewater treatment plants and food production industries make informed decisions to mitigate the potential impact of microplastics on the environment and human health. 

A comprehensive risk analysis and action plan requires quality information based on accurate identification. In search of a robust analytical tool that could enumerate, identify and describe the many microplastics that exist, project lead Dr. Wayne Parker and his team employed an advanced spectroscopy method that exposes particles to a range of wavelengths of light. Different types of plastics produce different signals in response to the light exposure. These signals act like fingerprints that can be used to classify particles as microplastic or not.

The challenge researchers often face is that microplastics come in wide varieties, because manufacturing additives and fillers can blur their “fingerprints” in a lab setting. This often makes it difficult to distinguish microplastics from organic material, and to tell the different types of microplastics apart. Human intervention is usually required to tease out subtle patterns and cues, which is slow and prone to error.

“Microplastics are hydrophobic materials that can soak up other chemicals,” said Parker, a professor in Waterloo’s Department of Civil and Environmental Engineering. “Science is still evolving in terms of how bad the problem is, but it’s theoretically possible that microplastics are enhancing the accumulation of toxic substances in the food chain.”

Parker approached Dr. Alexander Wong, a professor in Waterloo’s Department of Systems Design Engineering and the Canada Research Chair in Artificial Intelligence and Medical Imaging, for assistance. With his help, the team developed an AI tool called PlasticNet that enables researchers to rapidly analyze large numbers of particles approximately 50 per cent faster than prior methods and with 20 per cent more accuracy.

The tool is the latest sustainable technology designed by Waterloo researchers to protect our environment and engage in research that will contribute to a sustainable future.

“We built a deep learning neural network to enhance microplastic identification from the spectroscopic signals,” said Wong. “We trained it on data from existing literature sources and our own generated images to understand the varied make-up of microplastics and spot the differences quickly and correctly, regardless of the fingerprint quality.”
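
The article does not describe PlasticNet’s architecture, but a minimal sketch of the general approach, a small one-dimensional convolutional network that classifies spectroscopic “fingerprints” into material classes, might look like the following. The spectrum length, number of classes and layer sizes are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch (not the actual PlasticNet architecture) of how a 1D
# convolutional network could classify infrared spectra of particles into
# plastic types versus organic material.
import torch
import torch.nn as nn

NUM_WAVENUMBERS = 1024   # assumed length of each measured spectrum
NUM_CLASSES = 8          # hypothetical: several polymer types + "not plastic"

class SpectrumClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, NUM_CLASSES)

    def forward(self, x):            # x: (batch, 1, NUM_WAVENUMBERS)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SpectrumClassifier()
spectra = torch.randn(4, 1, NUM_WAVENUMBERS)   # stand-in for measured spectra
print(model(spectra).shape)                    # -> torch.Size([4, 8])
```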

Parker’s former PhD student, Frank Zhu, tested the system on microplastics isolated from a local wastewater treatment plant. Results show that it can identify microplastics with unprecedented speed and accuracy. This information can empower treatment plants to implement effective measures to control and eliminate these substances. 

The next steps involve continued learning and testing, as well as feeding PlasticNet more data to improve the quality of its microplastic identification for applications across a broad range of needs.

More information about this work can be found in the research paper, “Leveraging deep learning for automatic recognition of microplastics (MPs) via focal plane array (FPA) micro-FT-IR imaging”, published in Environmental Pollution. 

Enhanced AI tracks neurons in moving animals


Peer-Reviewed Publication

ECOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE

Video: Two-dimensional projection of 3D volumetric brain activity recordings in C. elegans. Green: genetically encoded calcium indicator; various colors: segmented and tracked neurons. Credit: Mahsa Barzegar-Keshteli (EPFL)



Recent advances allow imaging of neurons inside freely moving animals. However, to decode circuit activity, these imaged neurons must be computationally identified and tracked. This becomes particularly challenging when the brain itself moves and deforms inside an organism’s flexible body, e.g. in a worm. Until now, the scientific community has lacked the tools to address the problem.

Now, a team of scientists from EPFL and Harvard have developed a pioneering AI method to track neurons inside moving and deforming animals. The study, now published in Nature Methods, was led by Sahand Jamal Rahi at EPFL’s School of Basic Sciences.

The new method is based on a convolutional neural network (CNN), a type of AI that has been trained to recognize and understand patterns in images. This involves a process called “convolution”, which looks at small parts of the image at a time – such as edges, colors, or shapes – and then combines all that information to make sense of it and to identify objects or patterns.
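
As a toy illustration of that convolution step, the short PyTorch snippet below slides a hand-written 3x3 vertical-edge filter over a tiny synthetic image; the strong responses along the boundary are the kind of low-level pattern a CNN’s early layers learn to detect. The filter and image are invented for the example and have nothing to do with the worm recordings.

```python
# Toy example of convolution: a small edge filter responds strongly where
# image intensity changes, which is how early CNN layers pick out edges.
import torch
import torch.nn.functional as F

image = torch.zeros(1, 1, 8, 8)
image[..., :, 4:] = 1.0                      # left half dark, right half bright

edge_filter = torch.tensor([[[[-1., 0., 1.],
                              [-1., 0., 1.],
                              [-1., 0., 1.]]]])  # simple vertical-edge detector

response = F.conv2d(image, edge_filter, padding=1)
print(response[0, 0])   # strong responses in the columns next to the edge
```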

The problem is that, to identify and track neurons across a movie of an animal’s brain, many images have to be labeled by hand, because the animal appears very different over time due to its many different body deformations. Given the diversity of the animal’s postures, generating a sufficient number of annotations manually to train a CNN can be daunting.

To address this, the researchers developed an enhanced CNN featuring ‘targeted augmentation’. The innovative technique automatically synthesizes reliable annotations for reference out of only a limited set of manual annotations. The result is that the CNN effectively learns the internal deformations of the brain and then uses them to create annotations for new postures, drastically reducing the need for manual annotation and double-checking.
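
The snippet below gives a rough, hypothetical flavour of annotation synthesis by deformation: it warps an annotated image and its neuron-label mask with the same smooth random displacement field, yielding a new training pair for free. The published targeted-augmentation method learns the brain’s internal deformations from the recordings themselves rather than drawing them at random, so this is only a simplified stand-in.

```python
# Hedged illustration of augmentation for deformable bodies: warp an image
# and its label mask with the same smooth deformation field to produce a
# synthetic training pair. Not the paper's targeted-augmentation algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_pair(image, labels, alpha=15.0, sigma=4.0, seed=0):
    """Apply one random smooth deformation to an image and its label mask."""
    rng = np.random.default_rng(seed)
    shape = image.shape
    # Smooth random displacement fields for the y and x axes.
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = [yy + dy, xx + dx]
    warped_image = map_coordinates(image, coords, order=1)
    # Nearest-neighbour interpolation keeps integer neuron IDs intact.
    warped_labels = map_coordinates(labels, coords, order=0)
    return warped_image, warped_labels

image = np.random.rand(64, 64)                 # stand-in fluorescence frame
labels = np.random.randint(0, 5, (64, 64))     # stand-in neuron ID mask
aug_image, aug_labels = elastic_pair(image, labels)
print(aug_image.shape, aug_labels.shape)
```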

The new method is versatile, being able to identify neurons whether they are represented in images as individual points or as 3D volumes. The researchers tested it on the roundworm Caenorhabditis elegans, whose 302 neurons have made it a popular model organism in neuroscience.

Using the enhanced CNN, the scientists measured activity in some of the worm’s interneurons (neurons that bridge signals between neurons). They found that these interneurons exhibit complex behaviors, for example changing their response patterns when exposed to different stimuli, such as periodic bursts of odors.

The team have made their CNN accessible, providing a user-friendly graphical user interface that integrates targeted augmentation, streamlining the process into a comprehensive pipeline, from manual annotation to final proofreading.

“By significantly reducing the manual effort required for neuron segmentation and tracking, the new method increases analysis throughput threefold compared to full manual annotation,” says Sahand Jamal Rahi. “The breakthrough has the potential to accelerate research in brain imaging and deepen our understanding of neural circuits and behaviors.”

Other contributors

Swiss Data Science Center

 

How to identify vintage wines by their chemical signature


A team from UNIGE and ISVV – University of Bordeaux has revealed how to find the exact origin of a wine based solely on its chemical components.


Peer-Reviewed Publication

UNIVERSITÉ DE GENÈVE




Does every wine carry its own chemical signature and, if so, can this be used to identify its origin? Many specialists have tried to solve this mystery, without fully succeeding. By applying artificial intelligence tools to existing data, a team from the University of Geneva (UNIGE), in collaboration with the Institute of Vine and Wine Science at the University of Bordeaux, has succeeded in identifying with 100% accuracy the chemical mark of red wines from seven major estates in the Bordeaux region. These results, published in the journal Communications Chemistry, pave the way for potential new tools to combat counterfeiting and for predictive tools to guide decision-making in the wine sector.


Every wine is the result of a fine, complex mixture of thousands of molecules. Their concentrations fluctuate according to the composition of the grapes, which depends in particular on the nature and structure of the soil, the grape variety and the winegrower’s practices. These variations, even very small ones, can have a big impact on the taste of wine. This makes it very difficult to determine the precise origin of a wine based on this sensory criterion alone. With climate change, new consumer habits and an increase in counterfeiting, the need for effective tools to determine the identity of wines has become crucial.


Is there then a chemical signature, invariable and specific to each estate, that would make it possible to do this? ‘‘The wine sector has made numerous attempts to answer this question, with results that were questionable, or sometimes correct but obtained with cumbersome techniques. This is due to the great complexity of the blends and the limitations of the methods used, which are a bit like looking for a needle in a haystack,’’ explains Alexandre Pouget, full professor in the Department of Basic Neurosciences in the Faculty of Medicine at UNIGE.


One of the methods used is gas chromatography. This consists of separating the components of a mixture according to their affinity for two materials. The mixture passes through a very thin tube, 30 metres long. The components that have the greatest affinity with the tube material gradually separate from the others. Each separation is recorded by a ‘‘mass spectrometer’’. A chromatogram is then produced, showing ‘‘peaks’’ that indicate the molecular separations. In the case of wine, because of the many molecules that make it up, these peaks are extremely numerous, making detailed and exhaustive analysis very difficult.


Data processed by machine learning

In collaboration with Stephanie Marchand’s team from the Institute of Vine and Wine Science at the University of Bordeaux, Alexandre Pouget’s team found the solution by combining chromatograms and artificial intelligence tools. These chromatograms came from 80 red wines from twelve vintages (1990-2007) and from seven estates in the Bordeaux region. This raw data was processed using machine learning, a field of artificial intelligence in which algorithms learn to identify recurring patterns in sets of information.


‘‘Instead of extracting specific peaks and deducing concentrations, this method allowed us to take into account each wine’s complete chromatograms - which can comprise up to 30,000 points - including ‘‘background noise’’, and to summarise each chromatogram into two X and Y coordinates, after eliminating unnecessary variables. This process is called dimensionality reduction’’, explains Michael Schartner, a former postdoctoral scholar in the Department of Basic Neurosciences in the Faculty of Medicine at UNIGE, and first author of the study.
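
As a rough sketch of that idea, the snippet below treats each chromatogram as one long vector and projects a set of wines down to two coordinates with principal component analysis from scikit-learn. The study’s actual pipeline is not reproduced here and the data are random placeholders; the point is only to show what summarising each chromatogram into two X and Y coordinates means in practice.

```python
# Hedged sketch of dimensionality reduction on chromatograms: each wine is
# one long vector of intensities, reduced to a single (X, Y) point. Plain
# PCA on synthetic data, not the study's exact method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_wines, n_points = 80, 30_000          # 80 wines, full chromatogram per wine
chromatograms = rng.normal(size=(n_wines, n_points))   # stand-in raw data

X = StandardScaler().fit_transform(chromatograms)
coords_2d = PCA(n_components=2).fit_transform(X)        # one (X, Y) per wine
print(coords_2d.shape)   # -> (80, 2); plot these to look for estate "clouds"
```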


A 100% reliable model

By placing the new coordinates on a graph, the researchers were able to see seven ‘‘clouds’’ of points. They found that each of these clouds grouped together vintages from the same estate on the basis of their chemical similarities. ‘‘This allowed us to show that each estate does have its own chemical signature. We also observed that three wines were grouped together on the right and four on the left, which corresponds to the two banks of the Garonne on which these estates are located,’’ explains Stéphanie Marchand, a professor at the Institute of Vine and Wine Science at the University of Bordeaux, and co-author of the study.


Throughout their analyses, the researchers found that the chemical identity of these wines was not defined by the concentration of a few specific molecules, but by a broad chemical spectrum. ‘‘Our results show that it is possible to identify the geographical origin of a wine with 100% accuracy, by applying dimensionality reduction techniques to gas chromatograms,’’ says Alexandre Pouget, who led this research.


This research provides new insights into the components of a wine’s identity and sensory properties. It also paves the way for the development of tools to support decision-making - to preserve the identity and expression of a terroir, for example - and to combat counterfeiting more effectively.

 

Why regional differences in global warming are critical


New data analyses allow better evaluation of climate models


Peer-Reviewed Publication

MARUM - CENTER FOR MARINE ENVIRONMENTAL SCIENCES, UNIVERSITY OF BREMEN

Image: Planktonic foraminifera are microorganisms that live in the uppermost water layers of all oceans. When they die, their small calcareous shells sink to the seafloor and remain preserved in the sediments there. The fossil foraminifera document the conditions in the oceans, and their study enables a view into the past. Credit: MARUM – Center for Marine Environmental Sciences, University of Bremen; M. Kucera




Scientists use climate models to simulate past climate in order to determine how and why it has changed. Because of man-made climate change, the models cannot simply be applied to the future: the boundary conditions have changed. “We thus have to simulate the past in order to test the models. Simulations of climate from the Last Glacial Maximum, the LGM, are therefore important in the evaluation of climate models,” says first author Lukas Jonkers, adding that the glacial maximum provides a good test scenario: “Because how much the Earth has warmed since then could generally reflect what we can expect in the future.”

Although previous studies have shown that the overall change in global climate from the LGM until the present is reasonably consistent between models and paleoclimate reconstructions, the spatial temperature patterns that shape ecosystems and habitats and directly affect human society have not been sufficiently considered.

New approach is based on a fundamental macroecological principle

To check whether the simulations provide an accurate picture of past climate, researchers compare them with reconstructions based on fossils. Both approaches possess a certain degree of uncertainty. Thus, when the two disagree, is it because of a problem with the simulation or the reconstruction? To be able to better test and evaluate climate models, Dr. Lukas Jonkers of MARUM and his co-authors have designed a new approach, which they have now presented in the journal Nature Geoscience. By applying a fundamental macroecological principle, the approach reduces the uncertainty of traditional reconstruction methods. This principle is that the further apart species communities are, the more they differ. A well-known example of this is the change in the vegetation with increasing altitude.

“In the marine realm, we see the same pattern of a reduction of the similarity between species communities. The further we move from the equator toward the poles the more the species change,” says Jonkers. “In the ocean this decreasing similarity is closely correlated to temperature. So, if the climate models predict past temperatures correctly, then we should, when we compare the simulated past temperatures with fossil species communities, observe this decline in similarity with increasing temperature difference.” Researchers can therefore use plankton distribution data from the glacial maximum to assess whether the simulated temperatures for the LGM can reproduce the same pattern of decreasing similarity of the assemblages as we observe it today.
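
In spirit, that test can be illustrated in a few lines of Python: compute the pairwise dissimilarity between plankton assemblages, compute the pairwise difference in simulated temperature at the same sites, and check whether the two rise together. The snippet below uses Bray-Curtis dissimilarity and random placeholder data purely for illustration; the published analysis is considerably more careful.

```python
# Hedged sketch of the similarity-decay check: community dissimilarity should
# increase with the difference in (simulated) temperature between sites.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sites, n_species = 50, 30
assemblages = rng.random((n_sites, n_species))   # stand-in relative abundances
simulated_sst = rng.uniform(0, 30, n_sites)      # stand-in model temperatures

community_dissimilarity = pdist(assemblages, metric="braycurtis")
temperature_difference = pdist(simulated_sst[:, None], metric="euclidean")

# If the simulated temperatures were realistic, dissimilarity would rise with
# temperature difference, as in the modern ocean (the random placeholders
# used here will of course show no such trend).
rho, p = spearmanr(temperature_difference, community_dissimilarity)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```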

For their study, the international team investigated more than 2,000 species assemblages of planktonic foraminifera from 647 sites. Planktonic foraminifera are widely-distributed marine plankton that live in the upper water layers of all the oceans. When they die their small shells sink to the seafloor and are preserved in the sediments.

The team discovered a different pattern of species similarity decline in the ice age data than observed in modern plankton. They interpreted this as evidence that the simulated temperatures do not represent the true ice-age temperatures.

“Our analysis indicates that the simulated temperatures were too warm in the North Atlantic and too uniform globally. New simulations using a weaker ocean circulation that transports less heat to the north, resulting in a cooler North Atlantic, fit the pattern better,” explains Lukas Jonkers. The underlying reason is related to the strength of the Atlantic Meridional Overturning Circulation and to ice-ocean interactions. The researchers conclude that the new method makes model comparisons more reliable. The new simulations also show that the models can, in principle, correctly calculate the temperature pattern during the Last Glacial Maximum. According to the authors, this indicates that a correct prediction of the spatial temperature pattern – if the right processes are taken into account – is also possible for the future.

More emphasis on the spatial impact of climate change

“Global climate change will have different impacts in different regions. This is important, as our society and ecosystems depend on what happens directly around us,” concludes Jonkers. “Our study highlights the need to investigate the spatial effects of climate change. This is important when we talk about limiting global warming to 1.5 degrees, because this value only refers to a global average.”

The publication appears as part of the PalMod climate modeling initiative funded by the Federal Ministry of Education and Research (BMBF). Under this initiative, researchers are working to decipher the climate of the past 130,000 years in order to predict the climate of the future. Their goal is to understand the scope of the models and the parameters on which they are based, and to make better predictions for the future.

The study is the result of a cooperative effort between researchers at the University of Bremen (MARUM and Faculty of Geosciences) and the University of Oldenburg under the framework of the Cluster of Excellence “The Ocean Floor – Earth’s Uncharted Interface”. Scientists from the Alfred Wegener Institute Helmholtz Center for Polar and Marine Research in Potsdam and Bremerhaven, as well as the Southern Marine Science and Engineering Guangdong Laboratory in Zhuhai (China) and Oregon State University (USA), are also involved in the study.

 

Scientific contact:

Dr. Lukas Jonkers
MARUM – Center for Marine Environmental Sciences, University of Bremen
Micropaleontology – Paleoceanography
Email: ljonkers@marum.de

 

MARUM produces fundamental scientific knowledge about the role of the ocean and the seafloor in the total Earth system. The dynamics of the oceans and the seabed significantly impact the entire Earth system through the interaction of geological, physical, biological and chemical processes. These influence both the climate and the global carbon cycle, resulting in the creation of unique biological systems. MARUM is committed to fundamental and unbiased research in the interests of society, the marine environment, and in accordance with the sustainability goals of the United Nations. It publishes its quality-assured scientific data to make it publicly available. MARUM informs the public about new discoveries in the marine environment and provides practical knowledge through its dialogue with society. MARUM cooperation with companies and industrial partners is carried out in accordance with its goal of protecting the marine environment.

 

 

Exposure to soft robots decreases human fears about working with them


Peer-Reviewed Publication

WASHINGTON STATE UNIVERSITY

Image: Washington State University doctoral students Justin Allen, left, and Ryan Dorosh demonstrate a soft robot in development at WSU. Credit: Dean Hare, Washington State University Photo Services




VANCOUVER, Wash. – Seeing robots made with soft, flexible parts in action appears to lower people’s anxiety about working with them or even being replaced by them.

A Washington State University study found that watching videos of a soft robot working with a person at picking and placing tasks lowered the viewers’ safety concerns and feelings of job insecurity. This was true even when the soft robot was shown working in close proximity to the person. This finding shows soft robots hold a potential psychological advantage over rigid robots made of metal or other hard materials.

“Prior research has generally found that the closer you are to a rigid robot, the more negative your reactions are, but we didn't find those outcomes in this study of soft robots,” said lead author Tahira Probst, a WSU psychology professor.

Currently, human and rigid robotic workers have to maintain a set distance for safety reasons, but as this study indicates, proximity to soft robots could be not only physically safer but also more psychologically accepted.

“This finding needs to be replicated, but if it holds up, that means humans could work together more closely with the soft robots,” Probst said.  

The study, published in the journal IISE Transactions on Occupational Ergonomics and Human Factors, did find that faster interactions with a soft robot tended to cause more negative responses, but when the study participants had previous experience with robots, faster speed did not bother them. In fact, they preferred the faster interactions. This reinforces the finding that greater familiarity increased overall comfort with soft robots.

About half of all occupations are highly likely to involve some type of automation within the next couple decades, said Probst, particularly those related to production, transportation, extraction and agriculture.

Soft robots, which are made with flexible materials like fabric and rubber, are still relatively new technology compared to rigid robots which are already widely in use in manufacturing.

Rigid robots have many limitations including their high cost and high safety concerns – two problems soft robots can potentially solve, said study co-author Ming Luo, an assistant professor in WSU’s School of Mechanical and Materials Engineering.

“We make soft robots that are naturally safe, so we don’t have to focus a lot on expensive hardware and sensors to guarantee safety like has to be done with rigid robots,” said Luo.

As an example, Luo noted that one rigid robot used for apple picking could cost around $30,000 whereas the current research and development cost for one soft robot, encompassing all components and manufacturing, is under $5,000. Also, that cost could be substantially decreased if production were scaled up.

Luo’s team is in the process of developing soft robots for a range of functions, including fruit picking, pruning and pollinating. Soft robots also have the potential to help elderly or disabled people in home or health care settings. Much more development has to be done before this can be a reality, Luo said, but his engineering lab has partnered with Probst’s psychology team to better understand human-robot interactions early in the process.

“It’s good to know how humans will react to the soft robots in advance and then incorporate that information into the design,” said Probst. “That's why we're working in tandem, where the psychology side is informing the technical development of these robots in their infancy.”

To further test this study’s findings, the researchers are planning to bring participants into the lab to interact directly with soft robots. In addition to collecting participants’ self-reported surveys, they will also measure participants’ physical stress reactions, such as heart rate and galvanic skin responses, which are changes in the skin’s electrical resistance in reaction to emotional stress.


 

Laser additive manufacturing: Listening for defects as they happen


Researchers from EPFL have resolved a long-standing debate surrounding laser additive manufacturing processes with a pioneering approach to defect detection.


Peer-Reviewed Publication

ECOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE

Image: A graphic representation of the experimental setup for listening for printing defects. Credit: © 2023 EPFL / Titouan Veuillet, CC-BY-SA 4.0





The progression of laser additive manufacturing — which involves 3D printing of metallic objects using powders and lasers — has often been hindered by unexpected defects. Traditional monitoring methods, such as thermal imaging and machine learning algorithms, have shown significant limitations. They often either overlook defects or misinterpret them, making precision manufacturing elusive and barring the technique from essential industries like aeronautics and automotive manufacturing. But what if it were possible to detect defects in real time based on the differences in the sound the printer makes during a flawless print and one with irregularities? Up until now, the prospect of detecting these defects this way was deemed unreliable. However, researchers at the Laboratory of Thermomechanical Metallurgy (LMTM) at EPFL's School of Engineering have successfully challenged this assumption.

Professor Roland Logé, the head of the laboratory, stated, "There's been an ongoing debate regarding the viability and effectiveness of acoustic monitoring for laser-based additive manufacturing. Our research not only confirms its relevance but also underscores its advantage over traditional methods."

This research is of paramount importance to the industrial sector as it introduces a groundbreaking, yet cost-effective solution to monitor and improve the quality of products made through Laser Powder Bed Fusion (LPBF). Lead researcher, Dr. Milad Hamidi Nasab, remarked, "The synergy of synchrotron X-ray imaging with acoustic recording provides real-time insight into the LPBF process, facilitating the detection of defects that could jeopardize product integrity." In an era where industries continuously strive for efficiency, precision, and waste reduction, these innovations not only result in significant cost savings but also boost the dependability and security of manufactured products.

How Does LPBF Manufacturing Work?

LPBF is a cutting-edge method that's reshaping metal manufacturing. Essentially, it uses a high-intensity laser to meticulously melt minuscule metal powders, creating layer upon layer to produce detailed 3D metallic constructs. Think of LPBF as the metallic version of a conventional 3D printer, but with an added degree of sophistication. Rather than melted plastic, it employs a fine layer of microscopic metal powder, which can vary in size from the thickness of a human hair to a fine grain of salt (15–100 μm). The laser moves across this layer, melting specific patterns based on a digital blueprint. This technique enables the crafting of bespoke, complex parts like lattice structures or distinct geometries, with minimal excess. Nevertheless, this promising method isn't devoid of challenges.

When the laser interacts with the metal powder, creating what is known as a melt pool, it fluctuates between liquid, vapor, and solid phases. Occasionally, due to variables such as the laser's angle or the presence of specific geometrical attributes of the powder or of the part, the process might falter. These instances, termed "inter-regime instabilities", can sometimes prompt shifts between two melting methods, known as "conduction" and "keyhole" regimes. During unstable keyhole regimes, when the molten powder pool delves deeper than intended, it can create pockets of porosity, culminating in structural flaws in the end product. To facilitate the measurement of the width and depth of the melt pool in X-ray images, the Image Analysis Hub of the EPFL Center for Imaging developed an approach that makes it easier to visualize small changes associated with the liquid metal and a tool for annotating the melt pool geometry.

Detecting These Defects Using Sound

In a joint venture with the Paul Scherrer Institute (PSI) and the Swiss Federal Laboratories for Materials Science and Technology (Empa), the EPFL team formulated an experimental design that melded operando X-ray imaging experiments with acoustic emission measurements. The experiments were conducted at the TOMCAT beamline of the Swiss Light Source at PSI, with the miniaturized LPBF printer developed in the group of Dr. Steven Van Petegem. The amalgamation with an ultra-sensitive microphone, positioned inside the printing chamber, pinpointed distinct shifts in the acoustic signal during regime transitions, thereby directly identifying defects during manufacturing.

A pivotal moment in the research was the introduction of an adaptive filtering technique by signal processing expert Giulio Masinelli from Empa. “This filtering approach,” Masinelli emphasized, “allows us to discern, with unparalleled clarity, the relationship between defects and the accompanying acoustic signature.” Unlike typical machine learning algorithms, which excel at extracting patterns from statistical data but are often tailored to specific scenarios, this approach provides broader insight into the physics of melting regimes, while offering superior temporal and spatial precision.
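
The article does not spell out the filter itself, but the general principle of adaptive filtering can be sketched in a few lines: a least-mean-squares (LMS) predictor learns the steady part of the acoustic signal on the fly, so abrupt regime changes stand out as spikes in its prediction error. The snippet below is a generic textbook LMS filter applied to synthetic data, not the algorithm used in the study.

```python
# Generic LMS adaptive filter: it learns to predict the steady part of a
# signal, so sudden changes appear as large prediction errors (residuals).
import numpy as np

def lms_filter(signal, order=8, mu=0.01):
    """Return the one-step prediction error of an LMS adaptive predictor."""
    weights = np.zeros(order)
    error = np.zeros_like(signal)
    for n in range(order, len(signal)):
        window = signal[n - order:n][::-1]       # most recent samples first
        prediction = weights @ window
        error[n] = signal[n] - prediction
        weights += mu * error[n] * window        # LMS weight update
    return error

# Synthetic acoustic trace: a steady hum plus a brief noisy burst standing in
# for a regime transition; the burst shows up clearly in the residual.
t = np.linspace(0, 1, 4000)
acoustic = np.sin(2 * np.pi * 50 * t)
acoustic[2000:2100] += np.random.default_rng(1).normal(0, 1.0, 100)
residual = lms_filter(acoustic)
print(np.abs(residual[:2000]).mean(), np.abs(residual[2000:2100]).mean())
```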

With this research, EPFL contributes valuable insights to the field of laser additive manufacturing. The findings have significant implications for potential industrial applications, particularly in sectors like aerospace and precision engineering. Reinforcing Switzerland's reputation for meticulous craftsmanship and manufacturing accuracy, the study underscores the need for consistent manufacturing techniques. Furthermore, it suggests the potential for early detection and correction of defects, enhancing product quality. Professor Logé concludes, "This research paves the way for a better understanding and refinement of the manufacturing process, and will ultimately lead to higher product reliability in the long term."

References

Hamidi Nasab, M., Masinelli, G., de Formanoir, C., Schlenger, L., Van Petegem, S., Esmaeilzadeh, R., Wasmer, K., Ganvir, A., Salminen, A., Aymanns, F., Marone, F., Pandiyan, V., Goel, S., & Logé, R. (2023). Harmonizing Sound and Light: X-Ray Imaging Unveils Acoustic Signatures of Stochastic Inter-Regime Instabilities during Laser Melting. Nature Communications. DOI: 10.1038/s41467-023-43371-3

 

Individually targeted therapies may improve treatment for psychosis


Peer-Reviewed Publication

UNIVERSITY OF SOUTHAMPTON




A paper from the University of Southampton examining how best to treat psychosis has concluded that a greater range of individually targeted therapies could improve outcomes for patients.

The research questions if Cognitive Behaviour Therapy (CBTp) for psychosis should remain the dominant treatment and suggests that, in the future, big data and artificial intelligence may help to develop a range of more bespoke therapies.

CBTp was introduced in the 1990s and after evaluation in a large number of clinical trials, it became an established treatment for psychosis. Now, psychologists at the universities of Southampton and Sheffield have asked if less complex, less costly approaches may be as, or more, effective.

Lead author on the paper, Professor Katherine Newman-Taylor of the School of Psychology at the University of Southampton, explains: “Our article asks whether CBTp benefits people with early psychosis and those with schizophrenia-related diagnoses in terms of clinical, functioning and recovery outcomes. Also, for young people with mental health conditions who are at high risk of developing psychosis.

“While acknowledging the benefits CBTp can have for some, we wanted to consider if we should now look elsewhere to improve outcomes and if refining existing therapies could better meet the needs of people with psychosis.”

Psychosis is when a person perceives or interprets reality in a very different way from others. It may involve hallucinations, delusions and disorganised thinking and speech. The term psychosis describes symptoms across a range of conditions but is typically associated with the diagnosis schizophrenia.

Psychosis can lead to feeling scared, anxious, threatened, confused and overwhelmed. CBTp works by helping people to make sense of their early life experiences, and current thoughts, feelings and behaviours, for example when hearing voices or in the grip of paranoia. Therapy involves working collaboratively to build the person’s ability and confidence that they can do what’s important to them, even if the voices, paranoia and other symptoms of psychosis persist.

The Southampton and Sheffield researchers examined two umbrella reviews conducted by other researchers in 2019 and 2023. An umbrella review provides a very high level analysis of a wide range and large number of past research papers to help reach conclusions about a topic or issue.

The team used these recent umbrella reviews to give a ‘bird’s eye view’ of the effectiveness of CBTp to treat psychosis in different groups of people. Their findings are published in a journal of The British Psychological Society.

The paper concludes that large scale analysis of treatment outcomes from pooled data is masking important nuances. While many are benefitting from CBTp, some patients only experience modest outcomes and others may be harmed by it.

The team says that by focusing on the therapeutic relationship and particular processes – such as worry and past trauma – clinicians would be able to help people more effectively.

They also propose the development of large datasets, interpreted by sophisticated AI machine learning tools, to help aid decisions about treatments. These may include CBTp alongside other approaches, such as working with the whole family, and setting up informal peer support networks early in the treatment process.

Professor Katherine Newman-Taylor concludes: “We predict that over the next 10 years, large, continually evolving datasets, built from patient experience, will be used to shape precision psychological therapies.

“Using data to determine treatment outcomes will help us to choose the right evidence-based therapy for an individual. However, it is vital that we use these methods to make decisions jointly with patients, and only work with organisations who we trust to manage our health data securely and ethically.”

Ends

 

Notes to Editors
 

  1. The article ‘Cognitive behavioural therapy for psychosis: The end of the line or time for a new approach?’ is published in the journal Psychology and Psychotherapy, DOI: 10.1111/papt.12498: https://doi.org/10.1111/papt.12498#
     
  2. For interviews, contact Peter Franklin, Media Relations, University of Southampton: press@soton.ac.uk, mobile 07748321087
     
  3. More about Psychology at the University of Southampton can be found here: https://www.southampton.ac.uk/about/faculties-schools-departments/school-of-psychology
     
  4. The University of Southampton drives original thinking, turns knowledge into action and impact, and creates solutions to the world’s challenges. We are among the top 100 institutions globally (QS World University Rankings 2023). Our academics are leaders in their fields, forging links with high-profile international businesses and organisations, and inspiring a 22,000-strong community of exceptional students, from over 135 countries worldwide. Through our high-quality education, the University helps students on a journey of discovery to realise their potential and join our global network of over 200,000 alumni. www.southampton.ac.uk