Friday, November 15, 2024

 

Experts urge complex systems approach to assess A.I. risks



The social context and its complex interactions must be considered and public engagement must be encouraged



Complexity Science Hub

An illustration of the complex system of AI-infused social networks, and a model of amplification of biases in networks 


A: an illustration of the complex system of AI-infused social networks. B: a model of amplification of biases in networks due to feedback between algorithmic and human decisions over time. B (i): simulations are performed on a synthetic network of 2,000 nodes, 30% of which are minorities, with minority and majority homophily both set to 0.7. In homophilic networks, minorities are underrepresented in the top ranks relative to their 30% share. In each time step, five random links are removed from the network and then rewired, with higher-ranked nodes having a greater probability of being chosen as targets for the new connections. The ranking is recalculated, and this process of link rewiring is repeated over many feedback loops. The fraction of minorities in the upper ranks (e.g., the top 10%) decreases over time. Results are averaged over 10 independent experiments. In part (ii), the fraction of minorities in the top 10% falls from 23.4% to 21.8% over 60 iterations. Part (iii) measures demographic parity: how far down the ranking one must go before minorities reach their expected 30% share, all else equal. At the start of the process, 30% minority representation is reached within the top 56% of nodes; by the end of the process, the top 71% must be included to reach this fair representation.


Credit: Complexity Science Hub




[Vienna, November 13, 2024] — With artificial intelligence increasingly permeating every aspect of our lives, experts are becoming more and more concerned about its dangers. In some cases the risks are pressing; in others they won't emerge until many months or even years from now. Scientists point out in The Royal Society’s journal Philosophical Transactions A that a coherent approach to understanding these threats is still elusive. They call for a complex systems perspective to better assess and mitigate these risks, particularly in light of long-term uncertainties and complex interactions between A.I. and society.

"Understanding the risks of A.I. requires recognizing the intricate interplay between technology and society. It's about navigating the complex, co-evolving systems that shape our decisions and behaviors,” says Fariba Karimi, co-author of the article. Karimi leads the research team on Algorithmic Fairness at the Complexity Science Hub (CSH) and is professor of Social Data Science at TU Graz.

“We should not only discuss what technologies to deploy and how, but also how to adapt the social context to capitalize on positive possibilities. A.I. possibilities and risks should likely be taken into account in debates about, for instance, economic policy,” adds CSH scientist Dániel Kondor, first author of the study.

Broader and Long-Term Risks

Current risk assessment frameworks often focus on immediate, specific harms, such as bias and safety concerns, according to the authors of the article published in Philosophical Transactions A. “These frameworks often overlook broader, long-term systemic risks that could emerge from the widespread deployment of A.I. technologies and their interaction with the social context in which they are used,” says Kondor.

“In this paper, we tried to balance the short-term perspectives on algorithms with long-term views of how these technologies affect society. It's about making sense of both the immediate and systemic consequences of A.I.," adds Kondor.

What Happens in Real Life

As a case study to illustrate the potential risks of A.I. technologies, the scientists discuss how a predictive algorithm was used during the Covid-19 pandemic in the UK for school exams. The new solution was “presumed to be more objective and thus fairer [than asking teachers to predict their students’ performance], relying on a statistical analysis of students’ performance in previous years,” according to the study. 

However, when the algorithm was put into practice, several issues emerged. “Once the grading algorithm was applied, inequities became glaringly obvious,” observes Valerie Hafez, an independent researcher and study co-author. “Pupils from disadvantaged communities bore the brunt of the futile effort to counter grading inflation, but even overall, 40% of students received lower marks than they would have reasonably expected.”

Hafez reports that many responses in the consultation report indicate that the risk perceived as significant by teachers—the long-term effect of grading lower than deserved—was different from the risk perceived by the designers of the algorithm. The latter were concerned about grade inflation, the resulting pressure on higher education, and a lack of trust in students’ actual abilities.

The Scale and the Scope

This case demonstrates several important issues that arise when deploying large-scale algorithmic solutions, emphasize the scientists. “One thing we believe one should be attentive to is the scale—and scope—because algorithms scale: they travel well from one context to the next, even though these contexts may be vastly different. The original context of creation does not simply disappear, rather it is superimposed on all these other contexts,” explains Hafez.

"Long-term risks are not the linear combination of short-term risks. They can escalate exponentially over time. However, with computational models and simulations, we can provide practical insights to better assess these dynamic risks,” adds Karimi. 

Computational Models – and Public Participation

This is one of the directions proposed by the scientists for understanding and evaluating risk associated with A.I. technologies, both in the short- and long-term. “Computational models—like those assessing the effect of A.I. on minority representation in social networks—can demonstrate how biases in A.I. systems lead to feedback loops that reinforce societal inequalities,” explains Kondor. Such models can be used to simulate potential risks, offering insights that are difficult to glean from traditional assessment methods.
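To make the feedback-loop mechanism concrete, the kind of model described in the figure caption above can be sketched in a few lines of Python using networkx. This is an illustrative sketch, not the authors' implementation: degree is used as a stand-in for the ranking, and the network construction and parameter choices are placeholder assumptions.

```python
# Minimal sketch of a bias-amplification feedback loop: a homophilic network with a
# 30% minority, where a few links are repeatedly rewired toward higher-ranked nodes
# and the ranking is then recomputed. All parameters are illustrative assumptions.
import random
import networkx as nx

N, MINORITY_FRAC, HOMOPHILY = 2000, 0.30, 0.7
STEPS, LINKS_PER_STEP, TOP_FRAC = 60, 5, 0.10

random.seed(42)
minority = {i: (random.random() < MINORITY_FRAC) for i in range(N)}

# Build a homophilic random network: a candidate link between same-group nodes is
# accepted with probability HOMOPHILY, between different groups with 1 - HOMOPHILY.
G = nx.Graph()
G.add_nodes_from(range(N))
while G.number_of_edges() < 5 * N:
    u, v = random.sample(range(N), 2)
    p = HOMOPHILY if minority[u] == minority[v] else 1 - HOMOPHILY
    if random.random() < p:
        G.add_edge(u, v)

def minority_share_in_top(graph, frac):
    """Fraction of minority nodes among the top `frac` of nodes, ranked by degree."""
    ranked = sorted(graph.nodes, key=graph.degree, reverse=True)
    top = ranked[: int(frac * len(ranked))]
    return sum(minority[n] for n in top) / len(top)

print("minority share in top 10% before rewiring:", minority_share_in_top(G, TOP_FRAC))

for _ in range(STEPS):
    # Feedback loop: remove a few random links, then rewire them so that
    # higher-ranked (here, higher-degree) nodes are more likely to gain the new link.
    G.remove_edges_from(random.sample(list(G.edges), LINKS_PER_STEP))
    nodes = list(G.nodes)
    weights = [G.degree(n) + 1 for n in nodes]
    for _ in range(LINKS_PER_STEP):
        source = random.choice(nodes)
        target = random.choices(nodes, weights=weights)[0]
        if source != target and not G.has_edge(source, target):
            G.add_edge(source, target)

print("minority share in top 10% after rewiring: ", minority_share_in_top(G, TOP_FRAC))
```

Running variations of such a simulation many times and averaging, as in the figure, shows how small per-step biases in who gains new links can compound into a steadily shrinking minority share at the top of the ranking.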

In addition, the study's authors emphasize the importance of involving laypeople and experts from various fields in the risk assessment process. Competency groups—small, heterogeneous teams that bring together varied perspectives—can be a key tool for fostering democratic participation and ensuring that risk assessments are informed by those most affected by AI technologies.

“A more general issue is the promotion of social resilience, which will help A.I.-related debates and decision-making function better and avoid pitfalls. In turn, social resilience may depend on many questions unrelated (or at least not directly related) to artificial intelligence,” ponders Kondor. Increasing participatory forms of decision-making can be one important component of raising resilience.

“I think that once you begin to see A.I. systems as sociotechnical, you cannot separate the people affected by the A.I. systems from the ‘technical’ aspects. Separating them from the A.I. system takes away their possibility to shape the infrastructures of classification imposed on them, denying affected persons the power to share in creating worlds attuned to their needs,” says Hafez, who’s an A.I. policy officer at the Austrian Federal Chancellery.


About the Study

The study “Complex systems perspective in assessing risks in A.I.,” by Dániel Kondor, Valerie Hafez, Sudhang Shankar, Rania Wazir, and Fariba Karimi was published in Philosophical Transactions A and is available online.


About CSH

The Complexity Science Hub (CSH) is Europe’s research center for the study of complex systems. We derive meaning from data from a range of disciplines —  economics, medicine, ecology, and the social sciences — as a basis for actionable solutions for a better world. Established in 2015, we have grown to over 70 researchers, driven by the increasing demand to gain a genuine understanding of the networks that underlie society, from healthcare to supply chains. Through our complexity science approaches linking physics, mathematics, and computational modeling with data and network science, we develop the capacity to address today’s and tomorrow’s challenges.

AI needs to work on its conversation game



Researchers discover why AI does a poor job of knowing when to chime in on a conversation



Tufts University




When you have a conversation today, notice the natural points when the exchange leaves open the opportunity for the other person to chime in. If their timing is off, they might be taken as overly aggressive, too timid, or just plain awkward.

The back-and-forth is the social element of the exchange of information that occurs in a conversation, and while humans do this naturally—with some exceptions—AI language systems are universally bad at it.

Linguistics and computer science researchers at Tufts University have now discovered some of the root causes of this shortfall in AI conversational skills and point to possible ways to make them better conversational partners.

When humans interact verbally, for the most part they avoid speaking simultaneously, taking turns to speak and listen. Each person evaluates many input cues to determine what linguists call “transition relevant places” or TRPs. TRPs occur often in a conversation. Many times we will take a pass and let the speaker continue. Other times we will use the TRP to take our turn and share our thoughts.

JP de Ruiter, professor of psychology and computer science, says that for a long time it was thought that the “paraverbal” information in conversations—the intonations, lengthening of words and phrases, pauses, and some visual cues—were the most important signals for identifying a TRP.

“That helps a little bit,” says de Ruiter, “but if you take out the words and just give people the prosody—the melody and rhythm of speech that comes through as if you were talking through a sock—they can no longer detect appropriate TRPs.”

Do the reverse and just provide the linguistic content in monotone speech, and study subjects will find most of the same TRPs they would find in natural speech.

“What we now know is that the most important cue for taking turns in conversation is the language content itself. The pauses and other cues don’t matter that much,” says de Ruiter.

AI is great at detecting patterns in content, but when de Ruiter, graduate student Muhammad Umair, and research assistant professor of computer science Vasanth Sarathy tested transcribed conversations against a large language model AI, the system detected appropriate TRPs nowhere near as well as humans do.

The reason stems from what the AI is trained on. Large language models, including the most advanced ones such as ChatGPT, have been trained on a vast dataset of written content from the internet—Wikipedia entries, online discussion groups, company websites, news sites—just about everything. What is missing from that dataset is any significant amount of transcribed spoken conversational language, which is unscripted, uses simpler vocabulary and shorter sentences, and is structured differently than written language.

AI was not “raised” on conversation, so it does not have the ability to model or engage in conversation in a more natural, human-like manner.

The researchers thought that it might be possible to take a large language model trained on written content and fine-tune it with additional training on a smaller set of conversational content so it can engage more naturally in a novel conversation. When they tried this, they found that there were still some limitations to replicating human-like conversation.
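The fine-tuning step described here can be pictured with a minimal sketch using the Hugging Face transformers library. The base model name, the transcript file, and the hyperparameters below are placeholder assumptions for illustration, not details from the study.

```python
# Minimal sketch (assumptions, not the study's code): fine-tune a written-text
# language model on a small corpus of transcribed spoken conversations.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "gpt2"                       # placeholder base model
CORPUS = "conversation_transcripts.txt"   # hypothetical file, one transcript per line

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Tokenize the conversational corpus for causal language modeling.
dataset = load_dataset("text", data_files=CORPUS)["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-conversation", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even with this kind of additional training, the researchers found the resulting models still fell short of human turn-taking, which motivates their caution below.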

The researchers caution that there may be a fundamental barrier to AI carrying on a natural conversation. “We are assuming that these large language models can understand the content correctly. That may not be the case,” said Sarathy. “They’re predicting the next word based on superficial statistical correlations, but turn taking involves drawing from context much deeper into the conversation.”

“It’s possible that the limitations can be overcome by pre-training large language models on a larger body of naturally occurring spoken language,” said Umair, whose PhD research focuses on human-robot interaction and who is the lead author on the studies. “Although we have released a novel training dataset that helps AI identify opportunities for speech in naturally occurring dialogue, collecting such data at the scale required to train today’s AI models remains a significant challenge. There are just not nearly as many conversational recordings and transcripts available compared to written content on the internet.”

The study results were presented at the Empirical Methods in Natural Language Processing (EMNLP) 2024 conference, held in Miami from November 11 to 17, and posted on arXiv.

 

 SPAGYRIC HERBALISM



Bioengineered yeast mass produces herbal medicine




Kobe University

The yeast Komagataella phaffii is well suited to produce components for the class of chemicals artepillin C belongs to, can be grown at high cell densities, and does not produce alcohol, which would otherwise limit cell growth.


Credit: BAMBA Takahiro




Herbal medicine is difficult to produce on an industrial scale. A team of Kobe University bioengineers manipulated the cellular machinery in a species of yeast so that one such molecule can now be produced in a fermenter at unprecedented concentrations. The achievement also points the way to the microbial production of other plant-derived compounds.

Herbal medicinal products offer many beneficial health effects, but they are often unsuitable for mass production. One example is artepillin C, which has antimicrobial, anti-inflammatory, antioxidant, and anticancer action, but is only available as a bee culture product. The Kobe University bioengineer HASUNUMA Tomohisa says: “To obtain a high-yield and low-cost supply, it is desirable to produce it in bioengineered microorganisms which can be grown in fermenters.” This, however, comes with its own technical challenges.

To begin with, one needs to identify the enzyme (the molecular machine) the plant uses to manufacture a specific product. “The plant enzyme that’s key to artepillin C production had only recently been discovered by YAZAKI Kazufumi at Kyoto University. He asked us whether we could use it to produce the compound in microorganisms, given our experience with microbial production,” says Hasunuma. The team then tried to introduce the gene coding for the enzyme into the yeast Komagataella phaffii, which, compared to brewer’s yeast, is better able to produce components for this class of chemicals, can be grown at higher cell densities, and does not produce alcohol, which would otherwise limit cell growth.

In the journal ACS Synthetic Biology, they now report that their bioengineered yeast produced ten times as much artepillin C as could be achieved before. They accomplished this feat by carefully tuning key steps along the molecular production line of artepillin C. Hasunuma adds: “Another interesting aspect is that artepillin C is not excreted into the growth medium readily and tends to accumulate inside the cell. It was therefore necessary to grow the yeast cells in our fermenters to high densities, which we achieved by removing some of the mutations introduced for technical reasons but that stand in the way of the organism’s dense growth.”

The Kobe University bioengineer already has ideas for how to further improve production. One approach will be to further raise the efficiency of the final, critical chemical step by modifying the responsible enzyme or by increasing the pool of precursor chemicals. Another approach may be to find a way of transporting artepillin C out of the cell. “If we can modify a transporter, a molecular structure that transports chemicals in and out of cells, such that it exports the product into the medium while keeping the precursors in the cell, we could achieve even higher yields,” Hasunuma says.

The implications of this study, however, go beyond the production of this particular compound. Hasunuma explains, “Since thousands of compounds with a very similar chemical structure exist naturally, there is the very real possibility that the knowledge gained from the production of artepillin C can be applied to the microbial production of other plant-derived compounds.”

This research was funded by the Japan Society for the Promotion of Science (grant 23H04967), the RIKEN Cluster for Science, Technology and Innovation Hub and the Japan Science and Technology Agency (grant JPMJGX23B4). It was conducted in collaboration with researchers from Kyoto University and the RIKEN Center for Sustainable Resource Science.

Kobe University is a national university with roots dating back to the Kobe Commercial School founded in 1902. It is now one of Japan’s leading comprehensive research universities with nearly 16,000 students and nearly 1,700 faculty in 10 faculties and schools and 15 graduate schools. Combining the social and natural sciences to cultivate leaders with an interdisciplinary perspective, Kobe University creates knowledge and fosters innovation to address society’s challenges.


Through introducing plant enzymes that can catalyze key steps along the molecular production line of artepillin C into yeast cells, and by tuning the balance of precursor molecules, the team around Kobe University bioengineer HASUNUMA Tomohisa produced artepillin C in fermenters at unprecedented concentrations.

 

Grandparents help grandkids in many ways – but the reverse may be true too, poll suggests



Less loneliness and better mental health seen among those who see or care for grandchildren often




Michigan Medicine - University of Michigan

Grandparents and loneliness 


Key findings about grandparenting and loneliness among older adults from the National Poll on Healthy Aging


Credit: University of Michigan




As many Americans prepare to gather with their families for the holidays, a new poll shows the importance of grandchildren in grandparents’ lives.

The poll also suggests that having grandchildren and seeing them regularly may have a link to older adults’ mental health and risk of loneliness.

Although the poll can’t show cause and effect, the findings suggest a need to study the role of grandparenting in older adults’ lives, as part of a broader effort to address social isolation.

At the same time, the poll found that many grandparents support their grandchildren under 18 in some way, from covering major expenses to providing childcare or babysitting regularly, or even daily.

The data come from the National Poll on Healthy Aging, based at the University of Michigan’s Institute for Healthcare Policy and Innovation. The poll is supported by AARP and Michigan Medicine, U-M’s academic medical center.

In all, the poll shows, 60% of adults aged 50 and over have at least one grandchild, including step-grandchildren, adopted grandchildren and great-grandchildren. That includes the 27% who said they have five or more grandchildren.

Those over age 65 were much more likely than those in their 50s and early 60s to say they have one or more grandchildren, at 76% versus 46%.

People with at least one grandchild were more likely than those without grandchildren to say they hardly ever feel isolated. In all, 72% of those with grandchildren say they hardly ever feel isolated, compared with 62% of those without grandchildren. People without grandchildren were also more likely to say their mental health is fair or poor compared with those who have grandchildren (13% versus 9%).

“For many older people, becoming a grandparent is a major milestone in their lives. Our findings show there are many dimensions to grandparenting, and possible positive effects of grandparenting, some of which may not be widely recognized,” said Kate Bauer, Ph.D., an associate professor of Nutritional Sciences in the U-M School of Public Health who worked with the poll team.

“With growing attention by policymakers to the role of social interaction in the well-being of people over age 50, and also the struggles of older adults who are raising children under 18, we hope our findings will inform those policy discussions,” said Bauer.

Poll director Jeffrey Kullgren, M.D., M.P.H., M.S., an associate professor of internal medicine at U-M and a physician at the VA Ann Arbor Healthcare System, says, “Health care providers should consider asking their older patients whether they are active in their grandchildren’s lives, and perhaps encourage more involvement among those who are struggling with loneliness or depression, even if they live far apart and need to connect virtually when they can’t be together.”

Caring for children

Nearly half (49%) of those who have grandchildren under age 18 provide care for them at least once every few months.  

In all, 20% of those with grandchildren under 18 care for one or more grandchild at least once a week, with 8% providing daily or near-daily care. Ten percent of grandparents who are age 50 to 64 reported providing daily or near-daily care, compared with 6% of those age 65 and over.

Older adults who identified as Hispanic were more likely to say they take care of a grandchild under 18 every day or nearly so, at 15% compared with 7% of non-Hispanic white, and 9% of non-Hispanic Black older adults who have grandchildren under age 18. With the high cost and limited availability of childcare in the U.S., grandparents who provide regular care for their grandchildren are giving their families a valuable resource, Bauer notes.

Seeing grandchildren

The poll asked older adults who have grandchildren under age 18 how often they see them. In all, 18% of grandparents see their grandchild or grandchildren every day or nearly every day, an additional 23% see them at least once a week and 23% see them once or twice a month, while 36% said they only see them every few months or less.

In general, grandparents who see their grandkids more often were less likely to say they feel isolated. Overall, 78% of those who see grandchildren under 18 every day or nearly every day said they hardly ever feel isolated, compared with 65% of those who see their grandchildren every few months or less. Also, 73% of those who see their grandchildren at least weekly or once or twice a month said they hardly ever feel isolated.

There was a similar trend when the poll team looked at those who reported hardly ever feeling a lack of companionship. In all, 57% of grandparents who see their grandchildren only every few months reported feeling this way, compared with around 70% of those who see them more frequently.

Those grandparents who see their grandchildren only every few months or less were more likely to say their mental health is fair or poor (13%) compared with those who see them at least once a week (4%) or once or twice a month (8%). There was no difference for physical health status.

Eating with or cooking with grandchildren

Bauer’s research focuses on social factors related to children’s eating behaviors and weight. The poll asked older adults who have grandchildren ages 1 to 17 whether they had engaged in food-related activities with their grandchildren in the past month.

In all, 61% of these older adults said they had shared at least one meal with a grandchild or grandchildren in the past month, and 47% said they had prepared food for them, while an equal percentage said they had bought food for their grandchildren. And a sizable percentage – 36% – said they had baked or cooked with their grandchildren in the past month.

“Eating, and especially cooking, with grandchildren can be an opportunity for older adults to make important social and cultural connections, such as passing down knowledge and recipes,” said Bauer. “Given how many grandparents are frequently engaging with their grandchildren around food and eating, it is important that they relay positive and healthy messages about nutrition and body size.”

Paying for grandchildren’s expenses

Overall, nearly one-third (32%) of older adults who have grandchildren under age 18 say they have helped provide financial support to them in some way in the past year.

This includes 23% who helped with day-to-day expenses such as clothes, meals and groceries; 10% who paid for educational expenses; and 10% who provided support for other big expenses such as summer camps, sports and daycare.

Living with grandchildren

Among all adults aged 50 and older who have grandchildren, 6% live in the same home as at least one of their grandchildren. This percentage was higher among Black older adults (9%) and Hispanic older adults (9%) compared with white older adults (5%), and among older adults in their 50s and early 60s (8%) compared with those over age 65 (4%).

Also, among those who have grandchildren, 3% said they have primary custody or primary parental responsibility of a grandchild aged 17 or younger. The percentage was higher (6%) among grandparents who are age 50 to 64, compared to those age 65 and older (1%).

Bauer notes the grandparents who take on full-time roles caring for grandchildren – forming what are sometimes called “grandfamilies” – can play a critical role in providing stability during challenging times in children’s lives. More research is needed on their role.

Michigan findings

Thanks to funding from the Michigan Health Endowment Fund, the poll team also looked at grandparenting among 1,174 Michiganders aged 50 and over.

In all, 63% of Michiganders in this age group have at least one grandchild, including 51% of those in their 50s and early 60s and 74% of those age 65 and over.

Older adults living in northern, central and southwestern Michigan were more likely to be grandparents than those in the southeastern part of the state. 

When it came to providing daily or nearly daily care for grandchildren under age 18, Michiganders reported doing so at about the same percentages as those in the rest of the country. But 25% of older Black Michiganders said they provide daily or near-daily care to a grandchild or grandchildren, higher than the national percentage, while the rate of daily or near-daily grandchild care among white Michiganders was around 6%, similar to the national figure.

In addition, 30% of Black Michigan grandparents said they see their grandchildren every day or nearly every day compared with 15% of white Michigan grandparents. Also, older Michigan women were more likely to say they see their grandchildren every day or nearly every day compared with men (21% versus 11%).

Grandparents living in the southeastern region were more likely to be part of ‘grandfamilies’ by having custody of at least one grandchild, with 5% saying they do, compared with 3% or less elsewhere in the state.

The poll findings come from a nationally representative survey conducted by NORC at the University of Chicago for IHPI and administered online and via phone in August 2024 among 3,486 adults ages 50 - 94 across the U.S. The Michigan sample included 1,174 respondents ages 50 - 94. The samples were subsequently weighted to reflect the U.S. and Michigan populations.

 

 

New report: Cyberthreats are growing – so are patents for technology to combat them



Patent data analysis highlights the leading companies in cybersecurity innovations



Digital Science

IFI CLAIMS Technology Spotlight: Cybersecurity 


In its latest Technology Spotlight, IFI CLAIMS Patent Services has analyzed the top companies and their inventions to help safeguard cybersecurity.


Credit: Digital Science / IFI CLAIMS




At a time when public trust has been undermined by strings of cyberattacks and cyber spying, IFI CLAIMS Patent Services – the industry’s most trusted patent data provider – has analyzed the top companies and their inventions to help safeguard cybersecurity.

Key points:

  • The Top 5 companies by number of cybersecurity patent applications:
  1. Microsoft (133 patent applications)
  2. IBM (122)
  3. Intel (121)
  4. KnowBe4 (108)
  5. Darktrace Holdings (74)
  • Patent grants for cybersecurity have grown by about 11% year-on-year for the past 10 years (see the quick calculation after this list).
  • The Top 5 cyber defense technology classifications attracting the most patent applications and grants:
  1. Network architectures
  2. Security arrangements for protecting computers
  3. Pattern recognition for signal processing
  4. Cryptographic mechanisms
  5. Machine learning
  • Over the past five years, the most-cited U.S. cybersecurity patent belongs to the global insurance and financial company Aon, for its cybersecurity risk assessment system, granted in 2019.
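For a sense of scale, roughly 11% annual growth compounds into nearly a tripling over a decade. A quick back-of-the-envelope check (illustration only, not a figure from the report):

```python
# Illustrative arithmetic only: how ~11% annual growth compounds over a decade.
rate, years = 0.11, 10
growth_factor = (1 + rate) ** years
print(f"{growth_factor:.2f}x")  # ~2.84x, i.e. patent grants nearly triple in 10 years
```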

See the full analysis at the IFI CLAIMS website: https://www.ificlaims.com/news/view/spotlight-cybersecurity.htm


About IFI CLAIMS Patent Services

IFI CLAIMS Patent Services uses a proprietary data architecture to produce the industry’s most accurate global patent database. The CLAIMS Direct platform allows for the easy integration of applications, other data sets, and analysis software. Headquartered in New Haven, Conn., with a satellite office in Barcelona, Spain, IFI CLAIMS is part of Digital Science, a digital research technology company based in London. For more information, visit www.ificlaims.com and follow IFI on LinkedIn.

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, OntoChem, Overleaf, ReadCube, Scismic, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit www.digital-science.com and follow @digitalsci on X or on LinkedIn.


Media contacts:

For media inquiries and interviews, please contact Lily Iacurci, Marketing Manager, IFI CLAIMS Patent Services: lily.iacurci@ificlaims.com

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

 

Giving robots superhuman vision using radio signals




University of Pennsylvania School of Engineering and Applied Science
Seeing Through Radio Waves 


Freddy Liu, Haowen Lai and Mingmin Zhao, from left, setting up a robot equipped with PanoRadar for a test run.


Credit: Sylvia Zhang




In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. For example, traditional, light-based vision sensors such as cameras or LiDAR (Light Detection And Ranging) fail in heavy smoke and fog. 

However, nature has shown that vision doesn't have to be constrained by light’s limitations — many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey's movements.

Radio waves, whose wavelengths are orders of magnitude longer than light waves, can better penetrate smoke and fog, and can even see through certain materials — all capabilities beyond human vision. Yet robots have traditionally relied on a limited toolbox: they either use cameras and LiDAR, which provide detailed images but fail in challenging conditions, or traditional radar, which can see through walls and other occlusions but produces crude, low-resolution images.

Now, researchers from the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering) have developed PanoRadar, a new tool to give robots superhuman vision by transforming simple radio waves into detailed, 3D views of the environment. 

"Our initial question was whether we could combine the best of both sensing modalities," says Mingmin Zhao, Assistant Professor in Computer and Information Science. "The robustness of radio signals, which is resilient to fog and other challenging conditions, and the high resolution of visual sensors."

In a paper to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), Zhao and his team from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab and the Penn Research In Embedded Computing and Integrated Systems Engineering (PRECISE) Center, including doctoral student Haowen Lai, recent master’s graduate Gaoxiang Luo and undergraduate research assistant Yifei (Freddy) Liu, describe how PanoRadar leverages radio waves and artificial intelligence (AI) to let robots navigate even the most challenging environments, like smoke-filled buildings or foggy roads.

PanoRadar is a sensor that operates like a lighthouse that sweeps its beam in a circle to scan the entire horizon. The system consists of a rotating vertical array of antennas that scans its surroundings. As they rotate, these antennas send out radio waves and listen for their reflections from the environment, much like how a lighthouse's beam reveals the presence of ships and coastal features. 

Thanks to the power of AI, PanoRadar goes beyond this simple scanning strategy. Unlike a lighthouse that simply illuminates different areas as it rotates, PanoRadar cleverly combines measurements from all rotation angles to enhance its imaging resolution. While the sensor itself is only a fraction of the cost of typically expensive LiDAR systems, this rotation strategy creates a dense array of virtual measurement points, which allows PanoRadar to achieve imaging resolution comparable to LiDAR. "The key innovation is in how we process these radio wave measurements," explains Zhao. "Our signal processing and machine learning algorithms are able to extract rich 3D information from the environment."

One of the biggest challenges Zhao's team faced was developing algorithms to maintain high-resolution imaging while the robot moves. "To achieve LiDAR-comparable resolution with radio signals, we needed to combine measurements from many different positions with sub-millimeter accuracy," explains Lai, the lead author of the paper. "This becomes particularly challenging when the robot is moving, as even small motion errors can significantly impact the imaging quality."

Another challenge the team tackled was teaching their system to understand what it sees. "Indoor environments have consistent patterns and geometries," says Luo. "We leveraged these patterns to help our AI system interpret the radar signals, similar to how humans learn to make sense of what they see." During the training process, the machine learning model relied on LiDAR data to check its understanding against reality and was able to continue to improve itself.
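This kind of cross-modal supervision, with LiDAR providing the ground truth that the radar-based model learns to match, can be pictured with a minimal PyTorch sketch. The toy network, tensor shapes, and random placeholder data below are assumptions for illustration, not the team's implementation.

```python
# Minimal sketch (assumptions, not PanoRadar code): train a model to predict
# LiDAR-like range images from processed radar measurements.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: radar "images" (1 channel) and matching LiDAR range maps.
radar = torch.randn(256, 1, 64, 512)       # e.g., elevation x azimuth heatmaps
lidar_depth = torch.rand(256, 1, 64, 512)  # ground-truth range from LiDAR
loader = DataLoader(TensorDataset(radar, lidar_depth), batch_size=16, shuffle=True)

# A small fully convolutional network standing in for the real architecture.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # penalize per-pixel range error against the LiDAR labels

for epoch in range(5):
    for radar_batch, depth_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(radar_batch), depth_batch)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: L1 range error {loss.item():.4f}")
```

At deployment time the LiDAR is no longer needed; it serves only as the training-time reference the radar model learns to imitate.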

"Our field tests across different buildings showed how radio sensing can excel where traditional sensors struggle," says Liu. "The system maintains precise tracking through smoke and can even map spaces with glass walls." This is because radio waves aren't easily blocked by airborne particles, and the system can even "capture" things that LiDAR can't, like glass surfaces. PanoRadar's high resolution also means it can accurately detect people, a critical feature for applications like autonomous vehicles and rescue missions in hazardous environments.

Looking ahead, the team plans to explore how PanoRadar could work alongside other sensing technologies like cameras and LiDAR, creating more robust, multi-modal perception systems for robots. The team is also expanding their tests to include various robotic platforms and autonomous vehicles. "For high-stakes tasks, having multiple ways of sensing the environment is crucial," says Zhao. "Each sensor has its strengths and weaknesses, and by combining them intelligently, we can create robots that are better equipped to handle real-world challenges."

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported by a faculty startup fund.

PanoRadar uses radio waves and AI to achieve superhuman vision. 

Credit: Sylvia Zhang