Saturday, October 28, 2023

 

How adults understand what kids are saying


It’s not easy to parse young children’s words, but adults’ beliefs about what children want to communicate help make it possible, a new study finds


Peer-Reviewed Publication

MASSACHUSETTS INSTITUTE OF TECHNOLOGY




CAMBRIDGE, MA -- When babies first begin to talk, their vocabulary is very limited. Often one of the first sounds they generate is “da,” which may refer to dad, a dog, a dot, or nothing at all.

How does an adult listener make sense of this limited verbal repertoire? A new study from MIT and Harvard University researchers has found that adults’ understanding of conversational context and knowledge of mispronunciations that children commonly make are critical to the ability to understand children’s early linguistic efforts. 

Using thousands of hours of transcribed audio recordings of children and adults interacting, the research team created computational models that let them start to reverse engineer how adults interpret what small children are saying. Models based on only the actual sounds children produced in their speech did a relatively poor job predicting what adults thought children said. The most successful models made their predictions based on large swaths of preceding conversations that provided context for what the children were saying. The models also performed better when they were retrained on large datasets of adults and children interacting.

The findings suggest that adults are highly skilled at making these context-based interpretations, which may provide crucial feedback that helps babies acquire language, the researchers say.

“An adult with lots of listening experience is bringing to bear extremely sophisticated mechanisms of language understanding, and that is clearly what underlies the ability to understand what young children say,” says Roger Levy, a professor of brain and cognitive sciences at MIT. “At this point, we don’t have direct evidence that those mechanisms are directly facilitating the bootstrapping of language acquisition in young children, but I think it’s plausible to hypothesize that they are making the bootstrapping more effective and smoothing the path to successful language acquisition by children.”

Levy and Elika Bergelson, an associate professor of psychology at Harvard, are the senior authors of the study, which appears today in Nature Human Behaviour. MIT postdoc Stephan Meylan is the lead author of the paper.

Adult listening skills are critical

While many studies have investigated how children learn to speak, in this project, the researchers wanted to flip the question and study how adults interpret what children say. 

“While people have looked historically at a number of features of the learner, and what is it about the child that allows them to learn things from the world, very little has been done to look at how they are understood and how that might influence the process of language acquisition,” Meylan says.

Previous research has shown that when adults speak to each other, they use their beliefs about how other people are likely to talk, and what they’re likely to talk about, to help them understand what their conversational partner is saying. This strategy, known as “noisy channel listening,” makes it easier for adults to handle the complex task of deciphering the acoustic sounds they’re hearing, especially in environments where voices are muffled or there is a lot of background noise, or when speakers have different accents.

In this study, the researchers explored whether adults can also apply this technique to parsing the often seemingly nonsensical utterances produced by children who are learning to talk.

“This problem of interpreting what we hear is even harder for child language than ordinary adult language understanding, which is actually not that easy either, even though we’re very good at it,” Levy says. 

For this study, the researchers made use of datasets originally generated at Brown University in the early 2000s, which contain hundreds of hours of transcribed conversations between children ages 1 to 3 and their caregivers. The data include both phonetic transcriptions of the sounds produced by the children and the text of what the transcriber believed the child was trying to say.

The researchers used other datasets of child language (which included about 18 million spoken words) to train computational language models to predict what words the children were saying in the original dataset, based on the phonetic transcription. Using neural networks, they created many different models, which varied in the sophistication of their knowledge of conversational topics, grammar, and children’s mispronunciations. They also manipulated how much of the conversational context each model was allowed to analyze before making its predictions of what the children said. Some models took into account just one or two words spoken before the target word, while others were allowed to analyze up to 20 previous utterances in the exchange.

The researchers found that using the acoustics of what the child said alone did not lead to models that were particularly accurate at predicting what adults thought children said. The models that did best used very rich representations of conversational topics, grammar, and beliefs about what words children are likely to say (ball, dog or baby, rather than mortgage, for example). And much like humans, the models’ predictions improved as they were allowed to consider larger chunks of previous exchanges for context. 
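As a rough sketch of the noisy-channel idea at work here (an invented toy example, not the authors' neural-network models; the vocabulary, prior probabilities, and mispronunciation likelihoods below are illustrative assumptions), an adult listener's best guess can be framed as combining a context-based prior over words with a likelihood of the heard sounds given each candidate word:

```python
# Toy noisy-channel interpretation of a child's utterance.
# This is a minimal illustrative sketch, NOT the study's trained models:
# the vocabulary, prior probabilities, and confusion values are invented.

# Prior: how likely each word is given the preceding conversation
# (e.g., a dog was just mentioned, so "dog" is more probable).
context_prior = {"dog": 0.5, "dad": 0.3, "dot": 0.15, "mortgage": 0.05}

# Likelihood: how plausibly each word could come out as the heard form "da",
# reflecting common child mispronunciations such as dropped final consonants.
pronunciation_likelihood = {
    ("da", "dog"): 0.4,
    ("da", "dad"): 0.5,
    ("da", "dot"): 0.4,
    ("da", "mortgage"): 0.001,
}

def interpret(heard: str) -> str:
    """Return the word maximizing P(word | heard) ∝ P(heard | word) * P(word)."""
    scores = {
        word: pronunciation_likelihood.get((heard, word), 1e-6) * prior
        for word, prior in context_prior.items()
    }
    return max(scores, key=scores.get)

print(interpret("da"))  # -> "dog": context tips the balance toward the dog reading
```

In this toy setup, “dog” wins for the heard form “da” because the conversational prior favors it; widening the context window in the study's models plays the role of sharpening that prior with more of the preceding conversation.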

A feedback system

The findings suggest that when listening to children, adults base their interpretation of what a child is saying on previous exchanges that they have had. For example, if a dog had been mentioned earlier in the conversation, “da” was more likely to be interpreted by an adult listener as “dog.”

This is an example of a strategy that humans often use in listening to other adults, which is to base their interpretation on “priors,” or expectations based on prior experience. The findings also suggest that when listening to children, adult listeners incorporate expectations of how children commonly mispronounce words, such as “weed” for “read.”

The researchers now plan to explore how adults’ listening skills, and their subsequent responses to children, may help to facilitate children’s ability to learn language.

“Most people prefer to talk to others, and I think babies are no exception to this, especially if there are things that they might want, either in a tangible way, like milk or to be picked up, but also in an intangible way in terms of just the spotlight of social attention,” Bergelson says. “It’s a feedback system that might push the kid, with their burgeoning social skills and cognitive skills and everything else, to continue down this path of trying to interact and communicate.”

One way the researchers hope to study this interplay between child and adult is by combining computational models of how children learn language with the new model of how adults respond to what children say.

“We now have this model of an adult listener that we can plug into models of child learners, and then those learners can leverage the feedback provided by the adult model,” Meylan says. “The next frontier is trying to understand how kids are taking the feedback that they get from these adults and build a model of what these children expect that an adult would understand.” 

###

The research was funded by the National Science Foundation, the National Institutes of Health, and a CONVO grant to MIT’s Department of Brain and Cognitive Sciences from the Simons Center for the Social Brain.

 

 

NSF awards up to $21.4M for design of next-gen telescopes to capture earliest moments of universe


Instruments would help us understand the beginning, history, and makeup of the universe

Grant and Award Announcement

UNIVERSITY OF CHICAGO

Image: The South Pole Telescope (Credit: UChicago)




The National Science Foundation has awarded $3.7 million to the University of Chicago for the first year of a grant that may provide up to $21.4 million for the final designs for a next-generation set of telescopes to map the light from the earliest moments of the universe—the Cosmic Microwave Background.

Led by the University of Chicago and Lawrence Berkeley National Laboratory, the collaboration seeks to build telescopes and infrastructure in both Antarctica and Chile to search for what are known as “primordial” gravitational waves—the vibrations from the Big Bang itself. It would also map the microwave light from the cosmos in incredible detail and reveal how the universe evolved over time, as well as investigate the mystery known as dark matter.

This award will fund the continuing designs for the telescopes and cameras, working towards construction readiness. The entire project, known as CMB-S4, is proposed to be jointly funded by the National Science Foundation and the U.S. Department of Energy; it is expected to cost on the order of $800M and to come fully online in the early 2030s. The collaboration currently involves 450 scientists from more than 100 institutions, spanning 20 countries.

“With these telescopes we will be testing our theory of how our entire universe came to be, but also looking at physics at the most extreme scales in a way we simply cannot do with particle physics experiments on Earth,” said John Carlstrom, the Subrahmanyan Chandrasekhar Distinguished Service Professor of Astronomy and Astrophysics and Physics, who serves as the project scientist for CMB-S4.

The biggest questions

The cosmic microwave background is the light still traveling across the universe from the earliest moments after the Big Bang. Because it carries information about the birth of the universe, scientists have built incredibly complex instruments to map that light, from spacecraft and from the ground in the Chilean Atacama Plateau and at the NSF’s South Pole Station—including the current South Pole Telescope, which has been operating since 2007.

But we need to build a new generation of telescopes in order to answer the biggest questions—like whether our universe began with a burst of expansion at the dawn of time, known as inflation, which would have stretched minuscule quantum-mechanical fluctuations into the initial seeds of all of the structure in the universe today.

CMB-S4 would involve telescopes in two locations: a large telescope and nine smaller ones in Antarctica, and two large telescopes in the mountains of Chile. Each site plays an essential role in achieving the project’s scientific goals.

The telescopes in Chile would conduct a wide survey of the sky, trying to capture a fuller and more precise picture of the cosmic microwave background—and through it, helping us to understand the evolution and distribution of matter in the universe. The project can also look for evidence of “relic” light particles that many theories suggest may have existed in the early universe. CMB-S4 should provide clues on the nature of the mysterious stuff known as dark matter, as well as the dark energy that is causing the expansion of the universe to accelerate.

Meanwhile, the telescopes at NSF’s South Pole Station would take a very deep, sustained look at a smaller part of the sky. “The South Pole is the only location that allows a telescope to look at one place in the sky continuously, because it’s at the pole where the rest of the Earth spins around,” explained Jeff Zivick, deputy project manager for CMB-S4.

This would allow the telescopes to look for evidence of what are called primordial gravitational waves—the ripples in space-time that would have been created if the universe really did explode into being from a space much smaller than a single subatomic particle. These ripples would interact with the cosmic microwave background, creating a distinct but extremely faint signature.

This is an ambitious goal. “In many ways, the theory of inflation looks good, but most of the experimental evidence is somewhat circumstantial,” said Jim Strait, a physicist at Lawrence Berkeley National Laboratory and the project director for CMB-S4. “Finding primordial gravitational waves would be what some people have called ‘the smoking gun’ for inflation.”

Primordial gravitational waves would also be evidence to connect the force of gravity with the laws of quantum mechanics. The mismatch between the two theories, one of which applies at the very largest scales in the universe and the other at the very smallest, has been plaguing scientists for decades.

Image: South Pole Telescope lens (Credit: University of Chicago)


Finalizing designs

The new award from the National Science Foundation will help to fund the design work for the new telescopes and infrastructure at the sites. Going from conceptual design to final design involves analysis, simulations and modeling, and testing components of the telescopes. Although the underlying technology is well understood and has been field-tested, the design work for CMB-S4 is especially important because several of the telescopes will be the most complex of their kind ever built.

CMB-S4 is expected to have nearly 500,000 superconducting detectors, a significant increase over all precursor cosmic microwave background experiments combined. Carlstrom explained that the detectors are already so sensitive that the noise in the measurement is dominated by the background noise of everything else in the sky and atmosphere. The plan, therefore, is to greatly increase the number of measurements and average them to provide a precise measurement of the signal level and greatly reduce the noise.
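As a back-of-the-envelope illustration of why averaging helps (a standard statistical relation, not spelled out in the release, and assuming the noise is independent between measurements): if N independent detectors each measure the same signal S with random noise of standard deviation σ, averaging keeps the signal while shrinking the noise, so the signal-to-noise ratio grows with the square root of the detector count.

```latex
\sigma_{\mathrm{avg}} = \frac{\sigma}{\sqrt{N}}
\qquad\Longrightarrow\qquad
\mathrm{SNR}_{\mathrm{avg}} = \frac{S}{\sigma_{\mathrm{avg}}} = \sqrt{N}\,\frac{S}{\sigma}
```

On this idealized scaling, increasing the detector count by a factor of 100 improves sensitivity to the same sky signal by a factor of 10.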

The increased number of detectors will also require many other components of the project to scale up in size. “For example, we will need to build multiple cryostats, larger than we have ever built before, to effectively cool all these detectors to a temperature near absolute zero,” said Assoc. Prof. Brad Benson, a scientist at UChicago and Fermilab who is leading the effort to design the large camera cryostats for CMB-S4.

The CMB-S4 project is expected to be funded by the National Science Foundation and the U.S. Department of Energy. The National Science Foundation portion of the project is led by the University of Chicago, while the Department of Energy’s portion is led by Lawrence Berkeley National Laboratory.

 

Products made of plastic falsely claimed to be biodegradable are on sale at Brazilian supermarkets


Researchers at the Federal University of São Paulo analyzed allegedly biodegradable plastic items sold by 40 supermarkets and found most to be oxo-degradables, banned in several countries because they contribute significantly to microplastic pollution


Peer-Reviewed Publication

FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO




A famous study published in the journal Science showed that some 6.3 billion metric tons of plastic polymer had been produced and discarded in human history, and that only 9% had been recycled. Twelve percent had been incinerated and the remaining 79% left to accumulate in landfills or garbage dumps, from which about 10% reached the coast and eventually the sea.

These numbers are from eight years ago. The situation is certainly worse now. Although some countries have announced “zero plastic” policies, factories continue to churn out 400 million tons of plastic per year, and the amount thrown away continues to accumulate.

As a result, contamination by microplastics (fragments less than 5 millimeters in length) has become one of the worst environmental problems in the world, almost as serious as the climate crisis. Microplastics are everywhere – on land, in the sea and in the air. They have even been found in the human body – in the bloodstream, heart and lungs, and in placenta.

“You don’t find microplastics only where you don’t look,” said Ítalo Castro, a researcher and professor at the Federal University of São Paulo’s Institute of Marine Sciences (IMAR-UNIFESP) in Brazil. 

Unfortunately, some attempts at solving the problem are making matters worse, as shown by an investigation of greenwashing led by Castro in which researchers from IMAR-UNIFESP visited 40 supermarkets and analyzed products the manufacturers claimed to be made of biodegradable plastic. The stores belonged to major chains in São Paulo and Rio de Janeiro.

The study sample comprised 49 different products, including plates, cutlery, cups, straws, trays, and other utensils, as well as partyware.  On average, they cost 125% more than the equivalents made from conventional (non-biodegradable) plastic. None of them, including the major brands, met the minimum requirements to be considered genuinely biodegradable.

The results are published in Sustainable Production and Consumption. The first author of the article is Beatriz Barbosa Moreno, a PhD candidate with a scholarship from FAPESP and Castro as thesis advisor. 

“To be considered biodegradable, a product must convert into water [H2O], carbon dioxide [CO2], methane [CH4] and biomass when discarded into the environment. This should happen relatively quickly, in a few weeks to a year, although there’s no consensus regarding how long it should take. None of the 49 items investigated met this requirement,” Castro said.

More than 90% were made of a class of material that has become known as oxo-degradable, he added. Despite the name, these materials do not degrade in normal environmental conditions. They are polymers of fossil origin containing metal salt additives, which accelerate oxidation and fragmentation, but the fragments can remain in the environment for decades. Fragmentation does not contribute to degradation. It accelerates the formation of microplastic particles.

“Oxo-degradable plastic is banned in several parts of the world, including the European Union,” Castro said. “In most cases, the ban was due to lack of evidence of biodegradability in real-world conditions, associated with the risk of microplastic formation.”

Regulation

Oxo-degradable plastic has not been banned in Brazil, where it can legally be sold. However, quite apart from the misleading nomenclature, consumers are deceived by many companies that claim their products are certified to technical standards relating to biodegradability, such as ASTM D6954-4 or SPCR 141. “These standards merely provide guidelines for comparing degradation rates and changes in physical properties under controlled laboratory conditions, and they don’t concern the final stages of degradation. In fact, the organizations that produce the standards state on their websites that they must not be used for the purposes of certifying commercial plastic products as biodegradable,” Castro said.

For Castro, the claim that a commercial product is biodegradable when it is nothing of the kind can be considered greenwashing. “When a product that has been shown to harm the environment becomes widely used, official action should be taken to stop it. In Brazil, the Senate is debating a bill (PL 2524/2022) that would ban the use of oxo-degrading or pro-oxidant additives in thermoplastic resins, as well as the manufacturing, importing and marketing of packaging and products made of oxo-degradable plastic,” he said.

If PL 2524/2022 is passed in its present form, Castro explained, it could enable Brazil to engineer a transition to a circular economy in plastics. “This transition is urgently needed,” he said. “IMAR-UNIFESP is based in Santos on the coast of São Paulo state. In Santos, we detected an accumulation of microplastics in mangrove oysters [Crassostrea brasiliana] and brown mussels [Perna perna]. Both filter seawater for food and retain microparticles in their tissue, so that they are considered the gold standard for assessing environmental conditions in areas like this. The levels we detected were among the highest in the world compared with data from more than 100 similar studies conducted in 40 countries” (read more at: agencia.fapesp.br/44773). 

In response to our inquiries, the Brazilian Ministry for the Environment and Climate Change (MMA) said it supports PL 2524/2022 but with certain amendments. “The ministry is favorable to prohibition of oxo-degrading and pro-oxidant additives based on studies showing that microplastic particles are created when these additives cause fragmentation of plastic, which is particularly harmful to the marine environment,” it stressed.

The Brazilian Plastics Industry Association (ABIPLAST) also issued a statement saying it supports a ban on the use of oxo-degradable additives in plastic products. However, it opposes PL 2524/2022, which it sees as “confusing the circular economy with a ban on plastic products and targeting a single class of material”. The text also says that “the circular economy entails a systemic change and therefore requires a macro approach involving all sectors of the manufacturing industry”. 

Meanwhile, another bill – PL 1874/2022 [establishing a National Circular Economy Policy] – includes important provisions regarding strategic resource management, promotion of new business models, investment in research and innovation, and support for the transition to low-carbon technologies by means of the creation of attractive conditions for public and private investment, among others.

The statement sent by ABIPLAST says it “trusts that a serious and accurate science-based debate will promote a constructive dialogue on the correct use of plastic and all the benefits the material has brought society. The plastic sector has taken a leadership role in actions to promote a circular economy of this material, investing in technology, sustainability and innovation”.

About São Paulo Research Foundation (FAPESP)

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at www.fapesp.br/en and visit FAPESP news agency at www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.

 

New quantum effect demonstrated for the first time: Spinaron, a rugby ball in a ball pit


Peer-Reviewed Publication

UNIVERSITY OF WÜRZBURG

Image: The cobalt atom (red) has a magnetic moment (“spin,” blue arrow), which is constantly reoriented (from spin-up to spin-down) by an external magnetic field. As a result, the magnetic atom excites the electrons of the copper surface (gray), causing them to oscillate (creating ripples). This revelation by the Würzburg-Dresden Cluster of Excellence ct.qmat was made possible thanks to the physicists’ inclusion of an iron tip (yellow) on their scanning tunneling microscope. (Credit: Juba Bouaziz / Ulrich Puhlfürst)




Extreme conditions prevail in the Würzburg laboratory of experimental physicists Professor Matthias Bode and Dr. Artem Odobesko. Affiliated with the Cluster of Excellence ct.qmat, a collaboration between JMU Würzburg and TU Dresden, these visionaries are setting new milestones in quantum research. Their latest endeavor is unveiling the spinaron effect. They strategically placed individual cobalt atoms onto a copper surface, brought the temperature down to 1.4 Kelvin (–271.75° Celsius), and then subjected them to a powerful external magnetic field. “The magnet we use costs half a million euros. It’s not something that’s widely available,” explains Bode. Their subsequent analysis yielded unexpected revelations.

Tiny Atom, Massive Effect

“We can see the individual cobalt atoms by using a scanning tunneling microscope. Each atom has a spin, which can be thought of as a magnetic north or south pole. Measuring it was crucial to our surprising discoveries,” explains Bode. “We vapor-deposited a magnetic cobalt atom onto a non-magnetic copper base, causing the atom to interact with the copper’s electrons. Researching such correlation effects within quantum materials is at the heart of ct.qmat’s mission – a pursuit that promises transformative tech innovations down the road.”

Like a Rugby Ball in a Ball Pit

Since the 1960s, solid-state physicists have assumed that the interaction between cobalt and copper can be explained by the Kondo effect, with the different magnetic orientations of the cobalt atom and copper electrons canceling each other out. This leads to a state in which the copper electrons are bound to the cobalt atom, forming what’s termed a “Kondo cloud.” However, Bode and his team delved deeper in their laboratory, validating an alternate theory proposed in 2020 by theorist Samir Lounis of the research institute Forschungszentrum Jülich.

By harnessing the power of an intense external magnetic field and using an iron tip in the scanning tunneling microscope, the Würzburg physicists managed to determine the magnetic orientation of the cobalt’s spin. This spin isn’t rigid, but constantly switches back and forth, i.e. from “spin-up” (positive) to “spin-down” (negative), and vice versa. This switching excites the copper electrons, a phenomenon called the spinaron effect. Bode elucidates it with a vivid analogy: “Because of the constant change in spin alignment, the state of the cobalt atom can be compared to a rugby ball. When a rugby ball spins continuously in a ball pit, the surrounding balls are displaced in a wave-like manner. That’s precisely what we observed – the copper electrons started oscillating in response and bonded with the cobalt atom.” Bode continues: “This combination of the cobalt atom’s changing magnetization and the copper electrons bound to it is the spinaron predicted by our Jülich colleague.”

The first experimental validation of the spinaron effect, courtesy of the Würzburg team, casts doubt on the Kondo effect. Until now, it was considered the universal model to explain the interaction between magnetic atoms and electrons in quantum materials such as the cobalt-copper duo. Bode quips: “Time to pencil in a significant asterisk in those physics textbooks!”

Spinaron and Spintronics

In the spinaron effect, the cobalt atom remains in perpetual motion, maintaining its magnetic essence despite its interaction with the electrons. In the Kondo effect, on the other hand, the magnetic moment is neutralized by the electron interactions. “Our discovery is important for understanding the physics of magnetic moments on metal surfaces,” declares Bode. Peeking into the future, such phenomena could pave the way for magnetic information encoding and transportation in new types of electronic devices. Dubbed “spintronics,” this could make IT greener and more energy-efficient.

However, Bode tempers expectations when talking about the practicality of this cobalt-copper combination. “We’ve essentially manipulated individual atoms at ultra-low temperatures on a pristine surface in ultra-high vacuum. That’s infeasible for cell phones. While the correlation effect is a watershed moment in fundamental research for understanding the behavior of matter, I can’t build an actual switch from it.”

Currently, Würzburg quantum physicist Artem Odobesko and Jülich theorist Samir Lounis are concentrating on a large-scale review of the numerous publications that have described the Kondo effect in various combinations of materials since the 1960s. “We suspect that many might actually be describing the spinaron effect,” says Odobesko, adding: “If so, we’ll rewrite the history of theoretical quantum physics.”

Cluster of Excellence ct.qmat

The Cluster of Excellence ct.qmat – Complexity and Topology in Quantum Matter has been jointly run by Julius-Maximilians-Universität Würzburg and Technische Universität Dresden since 2019. Nearly 400 scientists from more than thirty countries and four continents study topological quantum materials that reveal surprising phenomena under extreme conditions such as ultra-low temperatures, high pressure, or strong magnetic fields. ct.qmat is funded through the German Excellence Strategy of the Federal and State Governments and is the only Cluster of Excellence in Germany to be based in two different federal states.


 

New phone case provides workaround for inaccessible touch screens


Touch screens are everywhere but not built for everyone. A new device could help bridge that gap, helping users access ticket kiosks, restaurant menus and more

Reports and Proceedings

UNIVERSITY OF MICHIGAN

 


 


A new smartphone case could soon enable folks with visual impairments, tremors and spasms to use touch screens independently. 

 

Developed at the University of Michigan, BrushLens could help users perceive, locate and tap buttons and keys on the touch screen menus now ubiquitous in restaurant kiosks, ATMs and other public terminals.

 

"So many technologies around us require some assumptions about users' abilities, but seemingly intuitive interactions can actually be challenging for people," said Chen Liang, a doctoral student in computer science and engineering. 

 

Liang is the first author of a paper accepted by the Association for Computing Machinery Symposium on User Interface Software and Technology in San Francisco. He will demo BrushLens at 7 p.m. Pacific Time Oct. 30 and present the paper at 9 a.m. Pacific Time Oct. 31.

 

"People have to be able to operate these inaccessible touch screens in the world. Our goal is to make that technology accessible to everyone," Liang said.

 

Liang works in the lab of Anhong Guo, U-M assistant professor of computer science and engineering. Guo led the development of BrushLens with Alanson Sample, an associate professor in the same department.

 

Users can comb through a touch screen interface by holding a phone fitted with BrushLens against the screen and dragging it across. The phone sees what's on the screen with its camera, then reads the options aloud by harnessing the phone's built-in screen readers. Users indicate their menu choice through screen readers or an enlarged, easy-to-tap button in the BrushLens app.

 

When given a target, BrushLens divides the screen into a grid, then guides the user's hand toward the section of the screen containing their menu choice by saying the coordinates of both the target and device. Once those coordinates overlap, pushbuttons or autoclickers on the underside of the phone case tap the screen for the user, depending on the model.
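A minimal sketch of that grid-and-announce loop appears below (hypothetical code for illustration only; the 4x3 grid, the class names and the tap trigger are assumptions, not the BrushLens implementation):

```python
# Illustrative sketch of grid-based target guidance, in the spirit of the
# BrushLens description above. All names, the grid size, and the tap trigger
# are hypothetical; this is not the released BrushLens code.

from dataclasses import dataclass

GRID_ROWS, GRID_COLS = 4, 3  # assumed grid resolution

@dataclass(frozen=True)
class Cell:
    row: int
    col: int

def to_cell(x: float, y: float, screen_w: float, screen_h: float) -> Cell:
    """Map an on-screen coordinate to the grid cell that contains it."""
    return Cell(int(y / screen_h * GRID_ROWS), int(x / screen_w * GRID_COLS))

def guide(target: Cell, device: Cell) -> str:
    """Announce both cell coordinates, or trigger a tap once they overlap."""
    if target == device:
        return "tap"  # the case's pushbutton/autoclicker would actuate here
    return (f"target at row {target.row}, column {target.col}; "
            f"phone at row {device.row}, column {device.col}")

# Example: the desired button sits near the bottom-right of a 1080x1920 screen,
# while the phone currently covers the top-left corner.
print(guide(to_cell(900, 1700, 1080, 1920), to_cell(100, 200, 1080, 1920)))
```

Repeating that announcement as the phone moves, and firing the clicker only once the two cells match, captures the behavior described above.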

 

"The user doesn't have to precisely locate where the button is and perform the touch gesture," Liang said.

 

Ten study participants, six with visual impairments and four with tremors or spasms, tested the hardware and app. 

 

"As a blind person, touch screens are pretty much inaccessible to me unless I have some help or I can plug headphones into the kiosk," said study participant Sam Rau. "Somebody else has to order for you, or they have to help you out with it. I don't want to be in a situation where I always have to rely on the kindness of others."

 

It took some time for Rau to figure BrushLens out, but once he became familiar with the device, he was excited by the tool's potential.

 

"I thought about myself going into a Panera Bread and being able to order from the kiosk," Rau said. "I could actually see myself accomplishing something that I otherwise thought impossible."

 

Likewise, BrushLens worked as intended for users whose tremors or spasms cause them to make unwanted selections on touch screens. For one participant with cerebral palsy, BrushLens improved their accuracy by nearly 74%.

 

The inventors of BrushLens recently applied for a patent with the help of Innovation Partnerships, U-M's central hub for research commercialization. The team hopes to bring the product to users as an affordable phone accessory. 

 

"The parts that we used are relatively affordable. Each clicker costs only $1," Liang said. "The whole device is definitely under $50, and that's a conservative estimate."

 

The team plans to further streamline their design so that it easily fits in a pocket. Offloading the battery and processing to the phone, for example, could make the design cheaper and less bulky.

 

"It doesn't have to be much more complex than a TV remote," said study co-author Yasha Iravantchi, a doctoral student in computer science and engineering.

 

The companion app could also be improved by allowing users to directly interface with it via voice commands, Liang said.

 

Participants were enrolled in the trial study with the help of the Disability Network, the University of Michigan Council for Disability Concerns and the James Weiland research group in the U-M Department of Biomedical Engineering. The research was funded by a Google Research Scholar Award.

 

Study: BrushLens: Hardware Interaction Proxies for Accessible Touchscreen Interface Actuation

 

Renewed support for high power laser facilities will benefit discovery science and inertial fusion energy research at SLAC


Grant and Award Announcement

DOE/SLAC NATIONAL ACCELERATOR LABORATORY

Image: Matter in Extreme Conditions (MEC) hutch 6, located in the LCLS Far Experimental Hall. (Credit: Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory)




Research and technology development for plasma physics and fusion energy at the Department of Energy’s SLAC National Accelerator Laboratory just got a boost from a LaserNetUS award. 

In total, the DOE’s Office of Science awarded $28.5 million to advance discovery science and inertial fusion energy, including a three-year grant for the development and operations of the Matter in Extreme Conditions (MEC) instrument at SLAC’s Linac Coherent Light Source (LCLS).

MEC has been home to high intensity laser experiments since 2012, and joined the LaserNetUS network as a founding member in 2018. The new DOE funding puts an additional focus on building the science and technologies needed to develop inertial fusion energy. 

Last year’s breakthrough at the National Ignition Facility brought into view the potential of inertial fusion energy, in which a net source of power can be created by heating and compressing pellets of fuel with powerful lasers. Since then, scientists in the field have come together to identify the most important basic research needs for realizing this potential future energy source, according to Gilliss Dyer, MEC department head and lead scientist. The resulting DOE Office of Science workshop produced the IFE Basic Research Needs Report on the topic.

“For the first time, MEC will emphasize inertial fusion energy priority research through development of capabilities and configurations, outreach through LaserNetUS, and the allocation of dedicated facility access for such research,” Dyer said. “The goal is to deliver up to 50% of MEC’s beam time for experiments relevant to inertial fusion energy.”

The activities of the network at MEC and other facilities will also help lay the groundwork for a major upgrade to MEC, Dyer said, by developing a new generation of diagnostics for hotter, denser plasmas.

Beyond inertial fusion energy science, high-intensity lasers have a broad range of applications in basic research, manufacturing and medicine. For example, they can be used to generate high energy particle beams for cancer therapy and to detect trace elements in the environment. SLAC’s MEC instrument has also enabled unique studies of extremely hot, dense matter found at the centers of stars and giant planets. The instrument’s optical lasers – one used to study hot, dynamic plasma and another to drive shockwaves in materials to study high pressures – combined with the world-leading X-ray laser beam of LCLS have produced numerous scientific results published in major journals.

LaserNetUS was established by the Fusion Energy Sciences program of the DOE’s Office of Science. It provides researchers from the U.S. and abroad open and free access to the most powerful lasers at universities and national laboratories throughout the U.S. and Canada. The network currently has more than 1,200 members. LaserNetUS management was centralized at SLAC in 2021 by appointing SLAC scientist Chandra Breanne Curry to be the consortium’s first coordinator.

 

Scientists call for a major investigation into the Congo Basin


Meeting Announcement

UNIVERSITY OF LEEDS

 

Leading researchers have launched a major scientific initiative to investigate - and help protect - the fragile Congo Basin Forest region in central Africa, one of the world’s most important but little understood ecosystems.  

They say the Congo Basin Science Initiative will transform the understanding of the Congo Basin, an area of 240 million hectares of contiguous tropical forests that absorb a vast quantity of carbon, which helps to moderate the impact of global climate change. 

The Initiative has been launched at the Three Basins Summit, a major gathering to discuss the world’s three large tropical forest regions being held in Brazzaville, Republic of the Congo. 

At the summit, leading researchers - including Professor Stephen Lewis, a prominent expert on the Congo forests from the University of Leeds and University College London – highlighted the stark difference between what scientists know about the Congo Basin and what they know about the Amazon Forest in South America, which has been the subject of intense scientific enquiry through what is known as the Large-Scale Biosphere-Atmosphere Experiment or LBA.  

Involving 120 projects and 1700 researchers, the decade-long scheme in the Amazon has revealed the critical role that the Amazon Rainforest plays in regulating the Earth’s climate. It has also helped train local scientists and has put Brazil at the forefront of rainforest science.  

The researchers meeting in Brazzaville aim to replicate that approach with the creation of the Congo Basin Science Initiative.

Professor Raphael Tshimanga, a leading expert on the Congo Basin based at the University of Kinshasa in the Democratic Republic of the Congo and one of the leaders of the project, said: "The Congo basin is a vast and important area that straddles central Africa, the world’s second green lung after the Amazon.  

“If we can replicate what has been achieved through investing in research in the Amazon, we will be in a much stronger position to understand the threats to this unique African ecosystem not only from climate change but also from deforestation and pollution from mining and oil exploration.”  

Key scientific questions 

Scientists have drawn up key scientific questions that need to be answered to assess the health of the Congo Basin forests. 

Professor Lewis, who helped to develop the science plan for the Initiative with scientific colleagues in the Congo region, said: “In the Amazon, scientists have uncovered a tipping point beyond which deforestation and climate change will lead to a mass die-back of the southern and eastern parts of the Amazon.

“In the Congo Basin, scientists do not know if there is a tipping point, because we do not yet have the data to investigate this.” 

Underinvestment in Congo Basin science 

The scientists argue that there has been “severe underinvestment” in the science needed to understand the Congo Basin, and that researchers face many barriers to leading high-level studies.

In recent years, only 11% of international funding for forest protection and sustainable management in tropical areas has been channelled to projects in the Congo Basin, whereas 34% went to the Amazon and 55% to southeast Asia.  

The lack of investment meant that in the latest IPCC global climate assessment, the Congo Basin was one of only two locations in the world without sufficient data to assess past trends in extreme heatwaves.

The meeting in Brazzaville is calling for $100 million to be invested in a ten-year science programme focussed on the Basin region, with a further $100 million to give PhD training to scientists from the Congo region. Once qualified, those scientists will be able to lead and co-ordinate complex studies.  

Funding would have to come from international donors, UN agencies and philanthropists. The researchers say that with international spending on research and development hitting $2.4 trillion in 2020, the funding needed to understand and protect the world’s second-largest tropical forest is relatively modest.

Professor Lewis added: “Central Africa needs more scientists who can monitor the forests, rivers and climate of the region. Central Africa needs more scientists who can advocate for evidence-based policy, so that these countries can develop and become prosperous, but without the mass-scale destruction of nature that has occurred in places like the UK.”

Congo Basin  

The Congo Basin forests are a global biodiversity hotspot and home to elephants, gorillas, chimpanzees and bonobos.  Rainfall from the region is recycled by the forests and transported beyond central Africa, feeding rivers that are used by 300 million people as far afield as Ethiopia and Egypt.  

There are major discoveries being made in the Congo Basin, with scientists recently mapping the world’s largest tropical peatland and describing many species that are new to science; recent finds include a new species of gecko, an air-breathing catfish, and a new species of coffee.

But experts fear that as the climate gets warmer, the undisturbed forests of the Congo Basin could switch from being one of the world’s biggest absorbers of carbon to an emitter of carbon.