Monday, November 27, 2023

 

Collaboration between women helps close the gender gap in ice core science


Analyzing the evolution of women's participation in ice core research


Peer-Reviewed Publication

UNIVERSITY OF ALBERTA




A Perspective article published today in Nature Geoscience tackles the longstanding issue of gender representation in science, focusing on the field of ice core science. Prior work has shown that despite progress toward gender parity over the past fifty years [1], women continue to be significantly underrepresented within the discipline of Earth sciences [2] and receive disproportionately fewer opportunities for recognition, such as invited talks, awards, and nominations [3]. This lack of opportunity can have long-term negative impacts on women’s careers. To help address these persistent gender gaps, the study evaluates patterns related to women’s publication in ice core science over the past fifty years. The study was co-led by Bess Koffman of Colby College, USA, and Matthew Osman of Cambridge University, UK, and coauthored by Alison Criscitiello and Sofia Guest, both of the University of Alberta, Canada.

To assess relationships among gender, publication rate, and the impact of coauthor networks, the study evaluates a comprehensive, global dataset of abstracts representing published work in ice core science spanning 1969 to 2021 in this historically male-dominated discipline. The Perspective article shows that the inferred gender gap in ice core science has narrowed from roughly 10:90 (women:men) in the 1970s to about 30:70 in the past decade. In contrast with prior work across the sciences, the authors find that women’s and men’s coauthor networks have remained similarly sized and been similarly cited through time. This finding may reflect the high degree of international cooperation and the large collaborative teams that are typical of the field of ice core science.
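
To illustrate the kind of analysis involved, the short Python sketch below computes decadal first-author gender shares from a tiny synthetic table of publication records. The records, column names, and values are assumptions for illustration only, not the authors' dataset or code.

```python
# Illustrative sketch only: decadal first-author gender shares computed from a tiny
# synthetic table of publication records. The records and column names are made up;
# this is not the authors' dataset or analysis code.
import pandas as pd

records = pd.DataFrame({
    "year":                [1972, 1975, 1983, 1994, 2005, 2012, 2016, 2019, 2020, 2021],
    "first_author_gender": ["man", "man", "man", "woman", "man",
                            "woman", "man", "woman", "man", "woman"],
})

records["decade"] = (records["year"] // 10) * 10
share_by_decade = (
    records.groupby("decade")["first_author_gender"]
    .value_counts(normalize=True)
    .unstack(fill_value=0.0)
)
# A real dataset of this form would show the shift from roughly 10:90 (women:men)
# in the 1970s toward about 30:70 in the most recent decade described above.
print(share_by_decade.round(2))
```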

Importantly, the gender makeup of coauthors differs substantially between man-led and woman-led studies. Strikingly, within the past decade, woman-led studies have included on average 20% more women coauthors than man-led studies, a difference that was even greater in earlier decades. Moreover, the analysis shows that, since the early 2000s, women have published first-authored papers at a rate about 8% higher than their estimated proportion within the ice core community. The new analysis by Koffman, Osman, Criscitiello and Guest suggests that senior women in particular catalyze women’s participation in publishing, and that collaboration between women can help close gender gaps in science.

References cited:

[1] Bernard, R. E. & Cooperdock, E. H. G. No progress on diversity in 40 years. Nature Geoscience 11, 292-295, doi:10.1038/s41561-018-0116-6 (2018).

[2] Holmes, M. A., O'Connell, S., Frey, C. & Ongley, L. Gender imbalance in US geoscience academia. Nature Geoscience 1, 79-82 (2008).

[3] Ford, H. L., Brick, C., Blaufuss, K. & Dekens, P. S. Gender inequity in speaking opportunities at the American Geophysical Union Fall Meeting. Nature Communications 9, doi:10.1038/s41467-018-03809-5 (2018).

[4] Pico, T., Bierman, P., Doyle, K. & Richardson, S. First Authorship Gender Gap in the Geosciences. Earth and Space Science 7, doi:10.1029/2020EA001203 (2020).

 

Stanford Medicine study reveals why we value things more when they cost us more


Neural basis for “sunk cost” pride


Peer-Reviewed Publication

STANFORD MEDICINE




Ahab hunting down Moby Dick. Wile E. Coyote chasing the Road Runner. Learning Latin. Walking over hot coals. Standing in a long line for boba tea or entrance to a small, overpriced clothing retail store. Forking up for luxury nonsense.

What do these activities have in common? They’re all examples of the overvaluation of what economists call “sunk costs”: the price you’ve already irretrievably paid in time, money, effort, suffering or any combination of them for an item, an experience or a sense of self-esteem.  

It’s a phenomenon we all recognize. It affects our behavior in ways that can be irrational. But we do it.

Here’s my story: My glacial-blue ’64 stick-shift Volvo station wagon had red, white and blue Colorado U.S. Bicentennial plates and a phalanx of three small bowling trophies for hood ornaments (I called it “the Bowlvo”). It was falling apart like a piece of overcooked chicken. (One day, I was shooting down Highway 25 in Colorado when the hood flew up in my face. Another time, as I was frantically downshifting into second gear while driving home at my usual unsafe speed on a winding mountain road, the shift lever came off in my hand.) I would have gone to the ends of the earth, or at least the end of my rope, to keep it in running condition. Or failing that, just to keep it.

For mysterious reasons, we are hardwired to value something more if we’ve put a lot of sweat equity — what we had to do to get (or in my case keep) that reward — into it. Neuroscientists are trying to figure out why we do that.

Shared stupidity

“We make fallacious decisions based on what we’ve invested in something, even if the probability of actually gaining an objective advantage from it is zero,” said assistant professor of psychiatry and behavioral sciences Neir Eshel, MD, PhD. “And it’s not just us. This has been shown in animals across the animal kingdom.”

OK — all higher animals are hardwired to make dumb decisions. But why?

Blame dopamine: the “do it again, do it some more” brain chemical that’s been much talked about in connection with pleasure, learning and habit formation.

There’s a difference between wanting something and liking it, said Eshel, who focuses on how the brain motivates behavior. “You can want something very, very much even though you don’t even like it very much. Or vice versa.”

A few years ago, Eshel, his then-postdoctoral adviser Rob Malenka, MD, PhD, the Nancy Friend Pritzker Professor in Psychiatry and the Behavioral Sciences, and some Stanford Medicine colleagues began conducting experiments to learn more about wanting versus liking and what, if any, role dopamine secretion in the brain plays in each of these states.

“We looked at how much an animal likes something — how much it will consume if that something is cost-free — and how much it wants something — how much that animal’s consumption is affected by the cost of getting it,” Eshel said.

The results of that experimentation are in a paper to be published Nov. 27 in Neuron.

The dopamine connection

In the course of their study, they came up with a possible neural mechanism for the longstanding psychological observation that we value rewards more if we worked harder for them: Dopamine release in the striatum, it turns out, is greatly influenced by the effort put forth to gain a reward.

“Now we may have found the neural basis for sunk cost,” Eshel said. “Dopamine could explain it.”

In their study of mice, the researchers defined “cost” as either the number of times the mice had to poke their noses into a hole in a box (anywhere between just once and nearly 50 times) or the risk of incurring mild to moderate foot shocks to get access to a “reward”: either sugar water or instant direct stimulation of dopamine release in two centers in a structure in the middle of the brain called the striatum. These centers are well known for their role in motivation and movement (motion), their abundance of dopamine receptors, and their innervation by dopamine-secreting tracts originating in regions deeper in the brain. And for their involvement in learning, habit formation and addiction.

The researchers first determined the test animals’ “cost-free consumption”: how much a mouse will consume until satiation in a cost-free situation (all it had to do was stick its nose in the hole, and bingo!). That told the investigators how much the mouse “liked” something.

Then, in steps, they raised the cost of acquisition by increasing the number of nose-pokes, or the intensity of electric shocks to a mouse’s feet, required to get the reward.

The researchers likewise methodically varied the amounts of reward (whether sucrose or direct stimulation of dopamine release in the striatum) animals got for a given amount of persistence or discomfort.

Dopamine release in mice’s striatum was assessed as soon as each reward was earned.

Not too surprisingly, striatal dopamine release was influenced by the size of the prize. But, the scientific team learned, raising the reward’s cost also triggered greater dopamine release in the striatum: There was a biochemical basis for the concept of sunk cost.
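
As a purely illustrative sketch of that relationship, the toy Python snippet below generates synthetic "dopamine" values that grow with the number of nose pokes required and fits a line to them. None of the numbers, and not the linear form itself, come from the study; this only mimics the qualitative finding described above.

```python
# Toy illustration only, with synthetic numbers: it mimics the qualitative finding
# that the measured dopamine signal scales with both reward size and the effort
# ("cost") paid to earn it. None of these values come from the study.
import numpy as np

rng = np.random.default_rng(0)
nose_pokes = np.array([1, 2, 5, 10, 20, 50])   # cost: pokes required per reward
reward_size = 1.0                               # fixed reward in this toy example

# Assume the dopamine response grows with reward and, per the sunk-cost result, with cost.
dopamine = 0.8 * reward_size + 0.05 * nose_pokes + rng.normal(0, 0.05, nose_pokes.size)

slope, intercept = np.polyfit(nose_pokes, dopamine, 1)
print(f"estimated increase in dopamine signal per extra nose poke: {slope:.3f}")
```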

Sunk cost and survival

How does this make any evolutionary sense? To an economist, valuing something because of sunk costs is aberrant decision making.

One idea, Eshel suggested: “In an environment with limited resources (as most are), when we typically get rewarded only after really hard work, we may need high dopamine secretion to get us to do it again.”

“Because dopamine reinforces previous behaviors, it may reflect sunk costs,” he said. “The dopamine release we saw may enable you to pay those steep costs in the future.”

Maybe Eshel could have tested me instead. I know a thing or two about sunk costs.

I still miss the Bowlvo.              

The research was funded by the National Institutes of Health (grants K08MH123791 and P50DA04201), the Brain & Behavior Research Foundation, the Burroughs Wellcome Fund, the Simons Foundation, and the Stanford Wu Tsai Neurosciences Institute.

# # #

 

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.

 

Deoxygenation levels similar to today’s played a major role in marine extinctions during major past climate change event


Peer-Reviewed Publication

TRINITY COLLEGE DUBLIN

Image: Sampling of the Carnduff cores (studied here), which were drilled in the Larne Basin, Northern Ireland. Credit: Prof. Micha Ruhl, Trinity College Dublin





Scientists have made a surprising discovery that sheds new light on the role that oceanic deoxygenation (anoxia) played in one of the most devastating extinction events in Earth’s history. Their finding has implications for present-day ecosystems – and serves as a warning that marine environments are likely more fragile than they appear.

New research, published today in the leading international journal Nature Geoscience, suggests that oceanic anoxia played an important role in ecosystem disruption and extinctions in marine environments during the Triassic–Jurassic mass extinction, a major extinction event that occurred around 200 million years ago.

Surprisingly, however, the study shows that the global extent of euxinia (an extreme form of deoxygenated conditions) was similar to that of the present day.

Earth’s history has been marked by a handful of major mass extinctions, during which global ecosystems collapsed and species went extinct. All past extinction events appear to have coincided with global climatic and environmental perturbations that commonly led to ocean deoxygenation. Because of this, oceanic anoxia has been proposed as a likely cause of marine extinctions at those times, with the assumption that more widespread deoxygenation would have led to a larger extinction event.

Using chemical data from ancient mudstone deposits obtained from drill-cores in Northern Ireland and Germany, an international research team led by scientists from Royal Holloway (UK), and including scientists from Trinity College Dublin’s School of Natural Sciences (Ireland) as well as from Utrecht University (Netherlands), was able to link two key aspects associated with the Triassic–Jurassic mass extinction.

The team discovered that pulses in deoxygenation in shallow marine environments along the margins of the European continent at that time directly coincided with increased extinction levels in those places.

On further investigation – and more importantly – the team also found that the global extent of extreme deoxygenation was rather limited, and similar to the present day. 

Micha Ruhl, Assistant Professor in Trinity’s School of Natural Sciences, and research-team member, said: 

“Scientists have long suspected that ocean deoxygenation plays an important role in the disturbance of marine ecosystems, which can lead to the extinction of species in marine environments. The study of past time intervals of extreme environmental change indeed shows this to be the case, which teaches us important lessons about potential tipping points in local, as well as global ecosystems in response to climatic forcing.

“Crucially however, the current findings show that even when the global extent of deoxygenation is similar to the present day, the local development of anoxic conditions and subsequent locally increased extinction rates can cascade into widespread or global ecosystem collapse and extinctions, even in areas where deoxygenation did not occur.

“It shows that global marine ecosystems become vulnerable, even when only local environments along the edges of the continents are disturbed. Understanding such processes is of paramount importance for assessing present day ecosystem stability, and associated food supply, especially in a world where marine deoxygenation is projected to significantly increase in response to global warming and increased nutrient run-off from continents.”

The study of past global change events, such as at the transition between the Triassic and Jurassic periods, allows scientists to disentangle the consequences of global climatic and environmental change and constrain fundamental Earth system processes that control tipping points in Earth’s ecosystems.

 

Image: A core sample of ~201-million-year-old sediments obtained from the Carnduff-2 core, drilled in the Larne Basin (Northern Ireland), showing the shell of an animal that lived on the seabed shortly after the Triassic–Jurassic global mass extinction. Credit: Prof. Micha Ruhl, Trinity College Dublin

Image: Professor Micha Ruhl in the lab.


 

Wind and solar projects can profit from bitcoin mining


Peer-Reviewed Publication

CORNELL UNIVERSITY




ITHACA, N.Y. – Bitcoin mining is often perceived as environmentally damaging because it uses huge amounts of electricity to power its intensive computing needs, but a new study demonstrates how wind and solar projects can profit from bitcoin mining during the precommercial development phase — when a wind or solar farm is generating electricity, but has not yet been integrated into the grid.

The findings suggest some developers could recoup millions of dollars to potentially invest in future renewable energy projects.

The study, “From Mining to Mitigation: How Bitcoin Can Support Renewable Energy Development and Climate Action,” was published in the journal ACS Sustainable Chemistry & Engineering and is authored by Cornell University doctoral student Apoorv Lal and Fengqi You, professor in energy systems engineering at Cornell. Jesse Zhu, a professor at Western University in Canada, also contributed to the research.

Texas emerged from the analysis as the state with the most potential, with 32 planned renewable projects that could generate combined profits of $47 million using bitcoin mining during precommercial operations. Projects in California produced the second highest profits in the study, while Colorado, Illinois, Iowa, Nevada and Virginia had fewer installations but still showed profitability.
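
As a quick back-of-the-envelope check of those figures (not a calculation from the paper's model), the Texas total works out to roughly $1.5 million per project, consistent with the "millions of dollars" per developer mentioned above:

```python
# Back-of-the-envelope check of the Texas figure quoted above (not from the paper's model):
# $47 million in combined profit across 32 planned projects averages to roughly
# $1.5 million per project.
texas_total_profit_usd = 47_000_000
texas_projects = 32
print(f"average profit per project: ${texas_total_profit_usd / texas_projects:,.0f}")
```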

“Profitability of a mining system hinges on periods of steady energy availability since renewable energy sources can vary significantly,” said You. “Therefore, it is important to site the mining farm strategically to maximize productivity.”

As an example, You pointed to California, Colorado, Nevada and Virginia as states where solar installations were the only type of renewable energy project that proved profitable in generating bitcoin during the precommercial phase.

The researchers suggest several policy recommendations that could help improve the economic feasibility of renewable energy projects and reduce carbon emissions. One is to provide economic rewards for environmentally responsible cryptocurrency mining, such as carbon credits for avoided emissions.

“These rewards can act as an incentive for miners to adopt clean energy sources, which can lead to combined positive effects on climate change mitigation, improved renewable power capacity, and additional profits during precommercial operation of wind or solar farms,” Lal said. “We also recommend policies that encourage cryptocurrency-mining operations to return some of their profits back into infrastructure development. This would help create a self-sustaining cycle for renewable energy expansion.”

While the study’s authors acknowledge that other aspects of cryptocurrency mining still have environmental costs, such as metal depletion and hardware that becomes obsolete within a few years, they said the results indicate that there are ways to mitigate some of the environmental costs of cryptocurrency mining and foster investments in renewable energy.

The research was partially funded by the National Science Foundation.

For additional information, see this Cornell Chronicle story.

-30-

 

New method uses crowdsourced feedback to help train robots


Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.


Reports and Proceedings

MASSACHUSETTS INSTITUTE OF TECHNOLOGY




To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning — a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.

In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent motivation to explore. The human expert must iteratively update that reward function as the agent explores and tries different actions. This can be time-consuming, inefficient, and difficult to scale up, especially when the task is complex and involves many steps.

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach that doesn’t rely on an expertly designed reward function. Instead, it leverages crowdsourced feedback, gathered from many nonexpert users, to guide the agent as it learns to reach its goal. 

While some other methods also attempt to utilize nonexpert feedback, this new approach enables the AI agent to learn more quickly, despite the fact that data crowdsourced from users are often full of errors. These noisy data might cause other methods to fail. 

In addition, this new approach allows feedback to be gathered asynchronously, so nonexpert users around the world can contribute to teaching the agent.

“One of the most time-consuming and challenging parts in designing a robotic agent today is engineering the reward function. Today reward functions are designed by expert researchers — a paradigm that is not scalable if we want to teach our robots many different tasks. Our work proposes a way to scale robot learning by crowdsourcing the design of reward function and by making it possible for nonexperts to provide useful feedback,” says Pulkit Agrawal, an assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS) who leads the Improbable AI Lab in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

In the future, this method could help a robot learn to perform specific tasks in a user’s home quickly, without the owner needing to show the robot physical examples of each task. The robot could explore on its own, with crowdsourced nonexpert feedback guiding its exploration.

“In our method, the reward function guides the agent to what it should explore, instead of telling it exactly what it should do to complete the task. So, even if the human supervision is somewhat inaccurate and noisy, the agent is still able to explore, which helps it learn much better,” explains lead author Marcel Torne ’23, a research assistant in the Improbable AI Lab.

Torne is joined on the paper by his MIT advisor, Agrawal; senior author Abhishek Gupta, assistant professor at the University of Washington; and others at the University of Washington and MIT. The research will be presented at the Conference on Neural Information Processing Systems next month.

Noisy feedback

One way to gather user feedback for reinforcement learning is to show a user two photos of states achieved by the agent, and then ask that user which state is closer to a goal. For instance, perhaps a robot’s goal is to open a kitchen cabinet. One image might show that the robot opened the cabinet, while the second might show that it opened the microwave. A user would pick the photo of the “better” state.

Some previous approaches try to use this crowdsourced, binary feedback to optimize a reward function that the agent would use to learn the task. However, because nonexperts are likely to make mistakes, the reward function can become very noisy, so the agent might get stuck and never reach its goal.
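
The snippet below is a minimal, hypothetical sketch of how a preference score can be fit to noisy binary comparisons of this kind, in the spirit of a Bradley-Terry model. It is not the HuGE implementation; all state features and labels are synthetic placeholders.

```python
# Minimal sketch (assumptions, not the HuGE implementation): fitting a simple
# Bradley-Terry-style preference score from noisy binary comparisons, where each
# comparison says which of two observed states a labeler judged closer to the goal.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
true_w = rng.normal(size=dim)                      # hidden "closeness to goal" direction

# Synthetic comparisons: (state_a, state_b, label) with ~20% labeler mistakes.
states_a = rng.normal(size=(500, dim))
states_b = rng.normal(size=(500, dim))
noiseless = (states_a @ true_w > states_b @ true_w).astype(float)
labels = np.where(rng.random(500) < 0.2, 1.0 - noiseless, noiseless)

w = np.zeros(dim)
for _ in range(2000):                              # plain gradient ascent on the log-likelihood
    diff = states_a @ w - states_b @ w
    p_a = 1.0 / (1.0 + np.exp(-diff))              # model's P(labeler prefers state_a)
    grad = (states_a - states_b).T @ (labels - p_a) / len(labels)
    w += 0.5 * grad

print(f"correlation with hidden goal direction: {np.corrcoef(w, true_w)[0, 1]:.2f}")
```

Even with a fifth of the labels flipped, the recovered score correlates strongly with the hidden goal direction, which is the property that lets noisy nonexpert feedback remain useful as a guide rather than a strict reward.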

“Basically, the agent would take the reward function too seriously. It would try to match the reward function perfectly. So, instead of directly optimizing over the reward function, we just use it to tell the robot which areas it should be exploring,” Torne says.

He and his collaborators decoupled the process into two separate parts, each directed by its own algorithm. They call their new reinforcement learning method HuGE (Human Guided Exploration). 

On one side, a goal selector algorithm is continuously updated with crowdsourced human feedback. The feedback is not used as a reward function, but rather to guide the agent’s exploration. In a sense, the nonexpert users drop breadcrumbs that incrementally lead the agent toward its goal.

On the other side, the agent explores on its own, in a self-supervised manner guided by the goal selector. It collects images or videos of actions that it tries, which are then sent to humans and used to update the goal selector. 

This narrows down the area for the agent to explore, leading it to more promising areas that are closer to its goal. But if there is no feedback, or if feedback takes a while to arrive, the agent will keep learning on its own, albeit in a slower manner. This enables feedback to be gathered infrequently and asynchronously.
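
The following Python sketch shows only the structure of these two decoupled loops as described above, with placeholder functions and a toy state space. All names are hypothetical; this is not drawn from the released HuGE code.

```python
# Structural sketch only (hypothetical names, toy state space; not the released HuGE
# code): the two decoupled loops described above. The goal selector is refit whenever
# crowdsourced comparisons arrive; the agent keeps exploring on its own regardless.

def fit_goal_selector(comparisons):
    """Placeholder: turn noisy 'which state looks closer to the goal?' labels into a scorer."""
    votes = {}
    for preferred, other in comparisons:
        votes[preferred] = votes.get(preferred, 0) + 1
        votes.setdefault(other, 0)
    return lambda state: votes.get(state, 0)

visited_states = ["s1", "s2", "s3"]                 # states the agent has reached so far
feedback_queue = [("s3", "s1"), ("s3", "s2")]        # asynchronous, possibly noisy labels
score = lambda state: 0                              # no feedback yet: explore uniformly

for step in range(5):
    if feedback_queue:                               # feedback may arrive rarely or never
        score = fit_goal_selector(feedback_queue)
        feedback_queue.clear()
    frontier = max(visited_states, key=score)        # steer exploration, don't dictate actions
    new_state = f"{frontier}_step{step}"             # stand-in for a self-supervised rollout
    visited_states.append(new_state)
    print(f"step {step}: exploring from {frontier}, reached {new_state}")
    # Images of new_state would be sent to labelers here, closing the crowdsourcing loop.
```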

“The exploration loop can keep going autonomously, because it is just going to explore and learn new things. And then when you get some better signal, it is going to explore in more concrete ways. You can just keep them turning at their own pace,” adds Torne.

And because the feedback is just gently guiding the agent’s behavior, it will eventually learn to complete the task even if users provide incorrect answers. 

Faster learning

The researchers tested this method on a number of simulated and real-world tasks. In simulation, they used HuGE to effectively learn tasks with long sequences of actions, such as stacking blocks in a particular order or navigating a large maze. 

In real-world tests, they utilized HuGE to train robotic arms to draw the letter “U” and pick and place objects. For these tests, they crowdsourced data from 109 nonexpert users in 13 different countries spanning three continents. 

In real-world and simulated experiments, HuGE helped agents learn to achieve the goal faster than other methods. 

The researchers also found that data crowdsourced from nonexperts yielded better performance than synthetic data, which were produced and labeled by the researchers. For nonexpert users, labeling 30 images or videos took less than two minutes.

“This makes it very promising in terms of being able to scale up this method,” Torne adds.

In a related paper, which the researchers presented at the recent Conference on Robot Learning, they enhanced HuGE so an AI agent can learn to perform the task, and then autonomously reset the environment to continue learning. For instance, if the agent learns to open a cabinet, the method also guides the agent to close the cabinet.

“Now we can have it learn completely autonomously without needing human resets,” he says.

The researchers also emphasize that, in this and other learning approaches, it is critical to ensure that AI agents are aligned with human values.

In the future, they want to continue refining HuGE so the agent can learn from other forms of communication, such as natural language and physical interactions with the robot. They are also interested in applying this method to teach multiple agents at once.

This research is funded, in part, by the MIT-IBM Watson AI Lab.

###

Written by Adam Zewe, MIT News

Paper: "Breadcrumbs to the Goal: Goal-Conditioned Exploration from Human-in-the-Loop Feedback"

https://arxiv.org/pdf/2307.11049.pdf

 

Study shows price discounts on healthful foods like vegetables and zero-calorie beverages lead to an increase in consumption of these foods


Peer-Reviewed Publication

THE MOUNT SINAI HOSPITAL / MOUNT SINAI SCHOOL OF MEDICINE

Image: Healthy vegetables. Credit: Mount Sinai Health System




Dietary food intake has a major influence on health indicators, including Body Mass Index (BMI), blood pressure, serum cholesterol and glucose. Previous research has shown that decisions to purchase specific food items are primarily based on taste and cost. In the United States, only 12 percent and 10 percent of adults meet fruit and vegetable intake recommendations, respectively. Since affordability is a limiting factor for meeting fruit and vegetable intake guidelines, the researchers hypothesized that making low energy-dense foods like fruits and vegetables, which are relatively more expensive than less healthy, high energy-dense foods, more affordable could increase their intake.

To observe the effects of multi-level (30 percent, 15 percent and zero percent) randomized discounts on fruits, vegetables and non-caloric beverages on changes in dietary intake, a team of researchers from the Icahn School of Medicine at Mount Sinai conducted a randomized, controlled trial that recruited primary household shoppers from several New York City supermarkets. The trial comprised an 8-week baseline, a 32-week intervention, and a 16-week follow-up. Twenty-four-hour dietary recalls were conducted during the baseline period and before the intervention midpoint. In-person clinical measures (including body weight, percent body fat, blood pressure, fasting serum glucose, hemoglobin A1C, and serum blood lipids) were analyzed from weeks 8 (end of baseline) and 24 (midpoint). This report is from an interim analysis up to the intervention midpoint at week 24, as the study is ongoing.

The study results, published November 22 in PLOS ONE, showed that the 30 percent discount led to significantly increased consumption of both vegetables and diet soda. The 15 percent discount group showed a non-significant increase in consumption of diet soda but no change for vegetables. Thus, a discount of 15 percent may not be adequate to influence vegetable intake. Unlike vegetable intake, there was no effect of the discounts on fruit intake during the initial study period up to the midpoint. Diet soda intake was inversely correlated with regular soda intake for those who received the 30 percent discount on diet soda. There were no significant differences in the clinical measures, including body weight, relative to the discounts.
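
For illustration only, the small sketch below shows the kind of between-arm comparison the interim analysis implies, using made-up servings-per-day numbers rather than the study's data; real effect sizes are reported in the PLOS ONE paper.

```python
# Illustrative only, with made-up numbers: contrasting the change in daily vegetable
# servings across the three discount arms between baseline and the week-24 midpoint.
baseline = {"30%": 1.8, "15%": 1.8, "0%": 1.9}   # hypothetical servings/day at baseline
midpoint = {"30%": 2.4, "15%": 1.8, "0%": 1.9}   # hypothetical servings/day at week 24

for arm in ("30%", "15%", "0%"):
    change = midpoint[arm] - baseline[arm]
    print(f"{arm} discount arm: change of {change:+.1f} servings/day")
```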

“Our findings that significant discounts on health foods can lead to an increase in consumption of these foods offer a suggestion for public health officials and policymakers to consider increasing access to nutritious foods and beverages,” said senior author Alan Geliebter, PhD, Professor of Psychiatry at Icahn Mount Sinai and an expert in obesity, food intake and eating disorders. “The results highlight a potential avenue for promoting healthier dietary intake behaviors and we hope this information will be used by policy makers to consider subsidizing fruits and vegetables via modification of the Farm Bill.”

To learn more about this study, please visit PLOS ONE.

 

Schrum and Sleeter unpacking the history of higher education in the United States


Grant and Award Announcement

GEORGE MASON UNIVERSITY




Kelly Schrum, Professor, Higher Education Program; Affiliated Faculty, History and Art History, and Nathan Sleeter, Research Assistant Professor, History and Art History, Roy Rosenzweig Center for History and New Media (RRCHNM), received $220,000 from the National Endowment for the Humanities for the project: "Unpacking the History of Higher Education in the United States." 

This funding began in Oct. 2023 and will end in late Dec. 2024. 

The history of higher education is central to understanding its present and future, especially for students in Higher Education and Student Affairs (HESA) programs who will lead colleges and universities for decades to come. Project co-directors Dr. Kelly Schrum (Higher Education Program) and Dr. Nate Sleeter (Roy Rosenzweig Center for History and New Media) at George Mason University will offer a four-week institute, Unpacking the History of Higher Education in the United States, in summer 2024, designed to improve history of higher education courses nationally and to deepen humanities engagement among future higher education leaders. Funded by the National Endowment for the Humanities (NEH), this institute will enable participants to engage deeply with history content and with history as a discipline. Participants will explore topics throughout the history of higher education and create digital teaching resources. The project will result in a robust Open Educational Resource (OER) on the history of higher education designed to facilitate teaching nationwide. This project grew out of a collaboration funded by 4-VA in 2020 and again in 2021.

###

 

SwRI-led PUNCH mission advances toward 2025 launch


Observatory integration begins in SwRI’s new Spacecraft and Payload Processing Facility


Business Announcement

SOUTHWEST RESEARCH INSTITUTE

Image: On November 17, 2023, the Polarimeter to UNify the Corona and Heliosphere (PUNCH) mission achieved an important milestone, passing its internal system integration review, clearing the mission to start integrating the four observatories. Three of the four PUNCH spacecraft will include SwRI-developed Wide Field Imagers (pictured) optimized to image the solar wind. The dark baffles in the top recess allow the instrument to image objects over a thousand times fainter than the Milky Way. Credit: Southwest Research Institute




SAN ANTONIO — November 27, 2023 — On November 17, 2023, the Polarimeter to UNify the Corona and Heliosphere (PUNCH) mission achieved an important milestone, passing its internal system integration review and clearing the mission to start integrating its four observatories. Southwest Research Institute leads PUNCH, a NASA Small Explorer (SMEX) mission that will integrate understanding of the Sun’s corona, the outer atmosphere visible during total solar eclipses, with the “solar wind” that fills and defines the solar system. SwRI is also building the spacecraft and three of its five instruments.

“This was an internal review, but it is a huge milestone for us,” said PUNCH Principal Investigator Dr. Craig DeForest of SwRI’s Solar System Science and Exploration Division. “It marks the transition from assembling subsystems to integrating complete observatories that are ready to launch into space.”

PUNCH is a constellation of four small suitcase-sized satellites scheduled to launch in 2025 into a polar orbit formation. One satellite carries a coronagraph, the Narrow Field Imager, which images the Sun’s corona continuously. The other three each carry SwRI-developed Wide Field Imagers (WFIs), optimized to image the solar wind. These four instruments work together to form a field of view large enough to capture a quarter of the sky, centered on the Sun.

In addition to the primary instruments, PUNCH includes a student-built instrument, the Student Energetic Activity Monitor (STEAM). The instrument is a spectrometer that captures the X-ray spectrum of the Sun, providing valuable diagnostic data to help the PUNCH team understand corona heating as well as the initial acceleration of the solar wind away from the surface of the Sun.

“Just as in astronomy when a new telescope like Hubble opens a new window to the universe, PUNCH’s four satellites are going to visualize a mysterious process, imaging how the solar corona transitions into the solar wind,” said Dr. James L. Burch, senior vice president of SwRI’s Space Sector. “As an authority in heliophysics research, SwRI is not only leading the science of this mission but also building the spacecraft and three of the four sensors designed to let us see, for the first time, the birth of the solar wind.”

SwRI’s new Spacecraft and Payload Processing Facility has received the first three PUNCH instruments for integration. The Narrow Field Imager from the Naval Research Laboratory and the STEAM X-ray spectrometer instrument from the Colorado Space Grant Consortium arrived in October. The first of three Wide Field Imagers has also been delivered, with the remaining two undergoing final integration and test.

Image: The Polarimeter to UNify the Corona and Heliosphere (PUNCH) mission achieved an important milestone, passing its internal system integration review, clearing spacecraft integration to begin in SwRI’s new Spacecraft and Payload Processing Facility. The team developed engineering models (EMs, shown in background) to finalize integration processes and test procedures. EMs continue to support high-fidelity flight software testing and flight procedure/script validation (shown in foreground). Credit: Southwest Research Institute

“The team really came together and completed a tremendous amount of verification work to get us ready for this review,” said PUNCH Project Manager Ronnie Killough. “This work will pay huge dividends as we prepare for our next major milestone, the pre-environmental review in early 2024.  That will clear the observatories for a battery of tests prior to spaceflight.”

The SMEX program provides frequent flight opportunities for world-class scientific investigations from space using innovative, efficient approaches within the heliophysics and astrophysics science areas. In addition to leading the PUNCH science mission, SwRI will operate the four spacecraft. The PUNCH team includes the U.S. Naval Research Laboratory, which is building the Narrow Field Imager, and RAL Space in Oxfordshire, England, which is providing detector systems for four visible-light cameras.

For more information, visit  https://www.swri.org/heliophysics.

 

UCF receives $1.5 million NSF grant to improve energy efficiency of wireless communications


The award, provided through the National Science Foundation’s Addressing Systems Challenges through Engineering Teams program, aims to address problems surrounding engineering systems and networks


Grant and Award Announcement

UNIVERSITY OF CENTRAL FLORIDA




Wireless devices consume more than just the hours users spend scrolling through social media, streaming podcasts and TV shows, and playing games. The networks used to connect these devices also consume a large amount of energy – up to a few thousand terawatt-hours annually worldwide, which is enough to power 70 million homes for one year.

UCF researcher Kenle Chen aims to enhance the energy efficiency of these systems with the support of a $1.5 million grant from the National Science Foundation’s Addressing Systems Challenges through Engineering Teams (ASCENT) program. ASCENT launched in 2020 with the goal of developing novel solutions to problems surrounding engineering systems and networks. It also promotes collaborations among researchers across three electrical engineering clusters: Communications, Circuits and Sensing Systems; Electronics, Photonics and Magnetic Devices; and Energy, Power, Control and Networks.

Chen, an assistant professor in the Department of Electrical and Computer Engineering, has teamed up with researchers from Purdue University and the University of California, Santa Barbara, to complete the project. They are one of seven teams selected for the ASCENT award this year.

“I feel very excited about receiving this competitive award that will provide us with a four-year funding support to perform this highly collaborative research,” Chen says. “Our project well aligns with the 2023 ASCENT program theme, Enhanced Energy Efficiency for Climate Change Mitigation, which will engender not only scientific advances but also broadened societal impacts.”

The team plans to incorporate advanced semiconductor technologies and artificial intelligence into a millimeter-wave radio system. This system widens the bandwidth of wireless communications for each user but also increases energy consumption.

To address this, Chen and his research group will develop advanced millimeter-wave power amplification circuits using highly efficient wide-bandgap semiconductors, which will be further integrated into a millimeter-wave radio system based on an antenna array. These circuits are also designed with ‘self-healing’ reconfigurability against variations in operational environments and system conditions.

Researchers from Purdue will lend their expertise to the semiconductor portion of the research. They will focus on the packaging of the technology and the assembly of silicon and non-silicon materials in microchips and antennas through a process called heterogeneous integration. They’ll also find solutions to keep the high-powered semiconductor devices cool in extreme temperatures.

UC Santa Barbara researchers will collaborate with Chen on the AI portion of the project, allowing for the autonomous control of advanced power amplification circuits. They will develop the algorithm and framework and test and train the AI for a faster processing time. This use of AI in wireless systems is fairly new in the industry, Chen says.

“We’re in the very early stages of integrating AI in this capacity,” Chen says. “In the future, we need to dynamically adjust the control settings of radio-frequency circuits because in many emerging wireless radio systems like 5G and 6G, the high complexity and compactness of the system make the operational environment subject to constant fluctuations.”

Chen also plans to integrate his research discoveries from the project into his course curriculum and to involve graduate students in the work in his lab. Although the four-year project will take time to develop, it could ultimately leave a lasting impact on the industry, he says.

“If the proposed new technologies can be successfully and realistically applied, we can save a huge amount of energy in wireless communications, possibly in the order of tens to hundreds of terawatt-hours per year,” Chen says. “Every industry is expected to be carbon neutral by 2050, so we need to move progressively toward that target over time.”

Chen joined the UCF Department of Electrical and Computer Engineering in 2018 as an assistant professor. He earned his doctoral degree in electrical engineering from Purdue University in 2013 and is a 2023 recipient of the NSF CAREER award.

Writer: Marisa Ramiccio, UCF College of Engineering and Computer Science