It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Thursday, July 13, 2023
Friday, 14 July 2023 – SHARK AWARENESS DAY
Discover white sharks and more in 3D! Cutting-edge, interactive shark and ray displays bring the ocean to life
New interactive models on the Save Our Seas Foundation’s (SOSF’s) World of Sharks website, together with new technology for young learners at the SOSF Shark Education Centre, bring to life the evolution and adaptations of sharks and rays – in 3D!
For release on Friday, 14 July 2023 – SHARK AWARENESS DAY
Have you ever wondered how many kinds of sharks there are? Which is the biggest shark or the fastest? For these answers and lots more, the Save Our Seas Foundation’s (SOSF’s) World of Sharks website is the one-stop shop for shark information. Designed to provide scientifically accurate information in an engaging format, World of Sharks is where you can find infographics, podcast episodes, species cards and topic pages covering everything you’ve ever wanted to know about sharks and rays.
“We wanted World of Sharks to be the ultimate shark FAQ – created to answer all the questions people want to ask about sharks and rays,” says SOSF CEO Dr James Lea. “Through engaging and accessible content, we hope to grow a repository of fascinating shark facts that people can trust.”
And now, with this latest addition, the website will host interactive 3D white shark and manta ray models designed by the Digital Life Project at the University of Massachusetts (UMASS) in collaboration with the SOSF.
“I was really wanting to create something 3D and interactive, where visitors to the World of Sharks can explore in an engaging way that highlights the unique physiology and evolution of sharks and rays and demystifies their unique adaptations,” explains Jade Schultz, content manager for the SOSF.
The Digital Life team, led by Professor Duncan Irschick, in collaboration with CG artist Johnson Martin and UMASS Amherst undergraduates Emma Hsiao and Braedon Fedderson, used media provided by the SOSF, together with open-access data and images, to reconstruct these species in 3D.
The interactive biology models enable website users to learn about different elements of shark and ray physiology. For instance, just allowing the cursor to hover over key features will bring up information on everything from how manta rays filter feed and why they are under threat to facts about how scientists use sharks’ dorsal fins to identify individuals in a population. The 3D models are open access, and free to view and download for non-profit use.
Although concerted efforts by researchers and educators are turning the tide for sharks and rays, significant challenges remain. More than one-third of these species are under threat of extinction, which means we still have much work to do to change misconceptions, banish misinformation and empower people with useful information so that they can also participate in conservation.
“The key to all our understanding of sharks – why they do what they do and what is needed to help them recover – relies on there being a foundation of basic, reliable life history information,” says Dr Lea.
The SOSF has a strong legacy of using communication and storytelling to do this, but this most recent commission with innovators from UMASS harnesses the power of creative design and technological advancement. The World of Sharks makes the reach for this kind of information global, but the SOSF is also excited to present very detailed and accurate information at the local scale.
Young visitors to the SOSF Shark Education Centre (SOSF-SEC) in Cape Town, South Africa, have an incredible opportunity to explore the rocky shores nearby in the Dalebrook marine protected area. This kind of in-person experience is irreplaceable, but to dive deeper into the reaches offshore requires technological wizardry and creative flair. A new website for the SOSF-SEC will host a diversity of 3D sharks that are found in False Bay, the largest bay in southern Africa. Children who would never otherwise dip below the waves to see these sharks will now be able to watch, for example, an endemic (found nowhere else in the world) catshark curl into a defensive doughnut-shape. Whether on iPads in the centre or at home online, learners can marvel at the most amazing feats of the sharks that live on their doorstep. Simulating behaviours like spyhopping in white sharks and demonstrating how sharks move in their environment give children an immersive experience, regardless of whether they have access to the ocean.
Still in the throes of the brainstorming and development that will expand these tools to their full potential, the director of the SOSF-SEC, Dr Clova Mabin, enthuses, “We also think that it might be possible to use the tools as a teaching aid in the classroom, to simulate field work. Learners could view them on the iPads and potentially take various measurements, comparing them across the different species.”
Sharks have special pores in their skin, known as the ampullae of Lorenzini, that allow them to detect electrical signals. Each pore is filled with a highly conductive gel, which carries weak electrical signals from the surrounding seawater to a receptor cell.
A team of researchers has developed new LEDs which emit light simultaneously in two different wavelength ranges, for a simpler and more comprehensive way to monitor the freshness of fruit and vegetables. As the team write in the journal Angewandte Chemie, modifying the LEDs with perovskite materials causes them to emit in both the near-infrared range and the visible range, a significant development in the contact-free monitoring of food.
Perovskite crystals are able to capture and convert light. Being simple to produce and highly efficient, perovskites are already used in solar cells but are also being intensively researched for suitability in other technologies. Angshuman Nag and his team at the Indian Institute of Science Education and Research (IISER) in Pune, India, are now proposing a perovskite application in LED technology that could simplify the quality control of fresh fruit and vegetables.
Without light converters, LEDs would emit light only in rather narrow bands. To cover the whole range of white light produced by the sun, the diodes in “phosphor-converted” (pc) LEDs are coated with luminescent substances. Nag and his team used a dual-emission coating to produce pc-LEDs that emit both white (“normal”) light and a strong band in the near-infrared (NIR) range.
To make the dual-emission pc-LED, they applied a double perovskite doped with bismuth and chromium. The researchers found that part of the bismuth component emits warm white light, while another part transfers energy to the chromium component, which then de-excites by emitting additional light in the NIR range.
NIR is already used in the food industry to examine freshness in fruit and vegetables. Nag and PhD student Sajid Saikia, first author of the paper, explain their idea: "Food contains water, which absorbs the broad near-infrared emission at around 1000 nm. The more water that is present [due to rotting], the greater the absorption of near-infrared radiation, yielding darker contrast in an image taken under near-infrared radiation. This easy, non-invasive imaging process can estimate the water content in different parts of food, assessing its freshness."
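The mechanism the researchers describe can be sketched with a simple attenuation calculation. Below is a minimal, illustrative Python sketch (not from the paper) assuming Beer-Lambert absorption of light around 1000 nm by water; the absorption coefficient, optical path length, and water fractions are placeholder values, chosen only to show why wetter, rotting tissue appears darker under NIR illumination.

```python
import numpy as np

# Illustrative sketch only: Beer-Lambert attenuation of near-infrared light
# by water. All numbers below are assumed placeholders, not values from the
# paper; the point is that a higher water fraction gives a darker NIR pixel.

MU_A_WATER = 0.04   # assumed water absorption coefficient near 1000 nm, 1/mm
PATH_MM = 20.0      # assumed effective optical path through the fruit, mm
I0 = 1.0            # incident NIR intensity (normalized)

def nir_brightness(water_fraction: float) -> float:
    """Relative transmitted NIR intensity for a given volumetric water fraction."""
    return I0 * np.exp(-MU_A_WATER * water_fraction * PATH_MM)

for label, water in [("fresh tissue", 0.80), ("rotting tissue", 0.95)]:
    print(f"{label}: water fraction {water:.2f} -> relative NIR brightness "
          f"{nir_brightness(water):.2f}")
```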
Using these modified pc-LEDs to examine apples or strawberries, the team observed dark spots that were not visible in standard camera images. Illuminating the food with both white and NIR light revealed both the normal coloring visible to the naked eye and the parts that were starting to rot but were not yet visibly so.
Saikia and Nag envision a compact device for simultaneous visual and NIR food inspection, although the need for two detectors, one for visible light and one for NIR light, could make such an instrument costly for everyday applications. On the other hand, the researchers emphasize that the pc-LEDs are easy to produce without chemical waste or solvents, and that the short-term costs could be more than recovered by the long service life and scalability of this novel dual-emitting device.
About the Author
Angshuman Nag is an Associate Professor of Chemistry at IISER Pune, India. His group develops novel semiconductors with favorable optoelectronic properties such as defect-free nanocrystals, lead-free metal halide perovskites, semiconductors with luminescent, plasmonic, and magnetic properties, and surface-engineered nanocrystals.
A team of scientists has devised a system that replicates the movement of naturally occurring phenomena, such as hurricanes and algae, using laser beams and the spinning of microscopic rotors.
The breakthrough, reported in the journal Nature Communications, reveals new ways that living matter can be reproduced on a cellular scale.
“Living organisms are made of materials that actively pump energy through their molecules, which produce a range of movements on a larger cellular scale,” explains Matan Yah Ben Zion, a doctoral student in New York University’s Department of Physics at the time of the work and one of the paper’s authors. “By engineering cellular-scale machines from the ground up, our work can offer new insights into the complexity of the natural world.”
The research centers on vortical flows, which appear in both biological and meteorological systems, such as algae or hurricanes. Specifically, particles move into orbital motion in the flow generated by their own rotation, resulting in a range of complex interactions.
To better understand these dynamics, the paper’s authors, who also included Alvin Modin, an NYU undergraduate at the time of the study and now a doctoral student at Johns Hopkins University, and Paul Chaikin, an NYU physics professor, sought to replicate them at their most basic level. To do so, they created tiny micro-rotors—about 1/10th the width of a strand of human hair—to move micro-particles using a laser beam (Chaikin and his colleagues devised this process in a previous work).
The researchers found that the rotating particles mutually affected each other into orbital motion, with striking similarities to dynamics observed by other scientists in “dancing” algae—algae groupings that move in concert with each other.
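As a rough illustration of how spin alone can drive this mutual orbiting, here is a toy two-dimensional Python sketch. It is not the authors’ model: each rotor is treated as a point that stirs the fluid with a purely tangential, rotlet-like flow decaying as 1/r², and each rotor is simply advected by the other’s flow; all constants are illustrative.

```python
import numpy as np

# Toy sketch, not the authors' model: two identical micro-rotors, each
# spinning in place. Each rotor induces a tangential flow that decays as
# 1/r^2 around it, and each is advected by the flow of the other, which is
# enough to make the pair orbit a common centre at fixed separation.

OMEGA = 1.0   # spin strength (illustrative units)
A = 1.0       # effective rotor radius (illustrative units)
DT = 1e-2
STEPS = 30000

def rotor_flow(at: np.ndarray, source: np.ndarray) -> np.ndarray:
    """Flow velocity at point `at` induced by a rotor located at `source`."""
    d = at - source
    r = np.linalg.norm(d)
    tangential = np.array([-d[1], d[0]]) / r   # unit vector, 90 degrees CCW of d
    return OMEGA * (A / r) ** 2 * tangential

p1 = np.array([-2.0, 0.0])
p2 = np.array([2.0, 0.0])
for step in range(STEPS + 1):
    if step % 10000 == 0:
        dx, dy = p2 - p1
        print(f"step {step:5d}: separation {np.hypot(dx, dy):.3f}, "
              f"pair orientation {np.degrees(np.arctan2(dy, dx)):6.1f} deg")
    v1 = rotor_flow(p1, p2)   # rotor 1 carried by rotor 2's flow
    v2 = rotor_flow(p2, p1)
    p1, p2 = p1 + DT * v1, p2 + DT * v2
```

Running the sketch, the separation stays essentially constant while the pair’s orientation angle sweeps around, i.e. the two rotors orbit each other, which is the qualitative behaviour reported for the laser-driven micro-rotors.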
In addition, the NYU team found that the spins of the particles reciprocate as the particles orbit.
“The spins of the synthetic particles reciprocate in the same fashion as that observed in algae—in contrast to previous work with artificial micro-rotors,” explains Ben Zion, now a researcher at Tel Aviv University. “So we were able to reproduce synthetically—and on the micron scale—an effect that is seen in living systems.”
“Collectively, these findings suggest that the dance of algae can be reproduced in a synthetic system, better establishing our understanding of living matter,” he adds.
The research was supported by grants from the Department of Energy (DE-SC0007991, SC0020976).
CAMBRIDGE, MA -- Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country.
But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.
“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings.
What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”
The scholars call this state of affairs an “AI-tocracy,” describing the connected cycle in which increased deployment of the AI-driven technology quells dissent while also boosting the country’s innovation capacity.
The open-access paper, also called “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics. An abstract of the uncorrected proof was first posted online in March. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics.
To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalogue instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020.
The researchers then examined records of almost 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public security tools, such as high-resolution video cameras, jumped significantly in the quarter following an episode of public unrest in that area.
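The analysis described here is, in spirit, a panel event-study: regress local facial-recognition procurement on lagged unrest while absorbing prefecture and time effects. The sketch below is schematic, not the authors’ code; the file name and column names (prefecture, quarter, fr_contracts, unrest_last_qtr) are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Schematic sketch of a two-way fixed-effects regression of the kind the
# study describes (not the authors' code). Hypothetical columns:
#   prefecture      - local government unit
#   quarter         - calendar quarter
#   fr_contracts    - facial-recognition AI contracts awarded that quarter
#   unrest_last_qtr - 1 if GDELT recorded local unrest in the previous quarter

panel = pd.read_csv("prefecture_quarter_panel.csv")  # hypothetical file

model = smf.ols(
    "fr_contracts ~ unrest_last_qtr + C(prefecture) + C(quarter)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["prefecture"]})

# A positive coefficient is the pattern the study reports: procurement jumps
# in the quarter after an episode of unrest.
print(model.params["unrest_last_qtr"], model.pvalues["unrest_last_qtr"])
```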
Given that Chinese government officials were clearly responding to public dissent by ramping up facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?
The scholars believe that it did, although as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. But as one way of getting at that question, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest. But in prefectures that had already invested heavily in facial-recognition technology, those same weather conditions were less likely to spark unrest than in prefectures that had not made the same investments.
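In regression terms, that check amounts to interacting protest-conducive weather with prior facial-recognition investment and asking whether the interaction is negative. Again, this is only a schematic sketch with hypothetical column names, not the authors’ specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Schematic sketch of the weather check (hypothetical columns):
#   unrest_events      - count of unrest events in a prefecture-quarter
#   protest_weather    - indicator for weather conditions conducive to protest
#   high_fr_investment - indicator for prefectures with heavy prior
#                        facial-recognition procurement

panel = pd.read_csv("prefecture_quarter_panel.csv")  # hypothetical file

check = smf.ols(
    "unrest_events ~ protest_weather * high_fr_investment"
    " + C(prefecture) + C(quarter)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["prefecture"]})

# If surveillance chills unrest, protest-friendly weather should matter less
# where investment is high, i.e. the interaction term should be negative.
print(check.params["protest_weather:high_fr_investment"])
```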
In so doing, the researchers also accounted for issues such as whether greater relative wealth levels in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. However, the scholars still reached the same conclusion: facial-recognition technology was being deployed in response to past protests and then reduced subsequent protest levels.
“It suggests that the technology is effective in chilling unrest,” Beraja says.
Finally, the research team studied the effects of increased AI demand on China’s technology sector and found the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For instance, firms that are granted procurement contracts for facial-recognition technologies subsequently produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand.
“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.
Such data — from China’s Ministry of Industry and Information Technology — also indicates that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.
Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.
“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says.
The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have.
“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds.
Other experts in the societal applications of AI say the paper makes a valuable contribution to the field.
“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”
For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial-recognition technologies around the world — highlighting a mechanism through which government repression could grow globally.
###
Support for the research was provided in part by the U.S. National Science Foundation Graduate Research Fellowship Program; the Harvard Data Science Initiative; and the British Academy’s Global Professorships program.
Artificial intelligence (AI) is all the rage lately in the public eye. Despite its rapid development, however, how AI can be incorporated to the advantage of our everyday lives remains an elusive question that deserves scientists’ attention. While in theory AI can replace, or even displace, human beings from their positions, the challenge remains how different industries and institutions can take advantage of this technological advancement rather than drown in it.
Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications on the education front, examining how AI could enhance grading while observing human participants’ behavior in the presence of a computerized companion. They found that teachers were generally receptive to the AI’s input – until the two sides clashed over who should reign supreme. This very much resembles how human beings interact with one another when a new member forays into existing territory.
The research was conducted by HKUST Department of Computer Science and Engineering Ph.D. candidate Chengbo Zheng and four of his teammates under the supervision of Associate Professor Xiaojuan Ma. They developed an AI group member named AESER (Automated Essay ScorER) and separated twenty English teachers into ten groups to investigate AESER’s impact in a group discussion setting, where the AI would contribute to opinion deliberation, ask and answer questions, and even vote on the final decision. In this study, designed akin to the controlled “Wizard of Oz” research method, a deep learning model and a human researcher jointly provided AESER’s input, and AESER would then exchange views and hold discussions with the other participants in an online meeting room.
While the team expected AESER to promote objectivity and provide novel perspectives that would otherwise be overlooked, potential challenges soon emerged. First, there was the risk of conformity, where the AI’s engagement could quickly create a majority that thwarts discussion. Second, the views provided by AESER were found to be rigid and even stubborn, which frustrated participants when they found that an argument could never be “won”. Many also did not think the AI’s input should be given equal weight, seeing it as better suited to the role of an assistant to actual human work.
"At this stage, AI is deemed somewhat 'stubborn' by human collaborators, for good and bad,” noted Prof. Ma. “On the one hand, AI is stubborn so it does not fear to express its opinions frankly and openly. However, human collaborators feel disengaged when they could not meaningfully persuade AI to change its view. Humans varying attitudes towards AI. Some consider it to be a single intelligent entity while others regard AI as the voice of collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.”
The immediate next step for the team involves expanding the study’s scope to gather more quantitative data, which will provide more measurable and precise insights into how AI impacts group decision-making. They are also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could potentially bring new insights and perspectives to group discussions.
Their study was published at the ACM Conference on Human Factors in Computing Systems in April 2023.
AUCKLAND, NZ and DURHAM, N.C. – Companion robots enhanced with artificial intelligence may one day help alleviate the loneliness epidemic, suggests a new report from researchers at Auckland, Duke, and Cornell Universities.
Their report, appearing in the July 12 issue of Science Robotics, maps some of the ethical considerations for governments, policy makers, technologists, and clinicians, and urges stakeholders to come together to rapidly develop guidelines for trust, agency, engagement, and real-world efficacy.
It also proposes a new way to measure whether a companion robot is helping someone.
“Right now, all the evidence points to having a real friend as the best solution,” said Murali Doraiswamy, MBBS, FRCP, professor of Psychiatry and Geriatrics at Duke University and member of the Duke Institute for Brain Sciences. “But until society prioritizes social connectedness and eldercare, robots are a solution for the millions of isolated people who have no other solutions.”
The number of Americans with no close friends has quadrupled since 1990, according to the Survey Center on American Life. Increased loneliness and social isolation may affect a third of the world population, and come with serious health consequences, such as increased risk for mental illness, obesity, dementia, and early death. Loneliness may even be as pernicious a health factor as smoking cigarettes, according to the U.S. Surgeon General Vivek H. Murthy, M.D.
Making new friends as an adult to offset loneliness is increasingly difficult, so companion robots designed to support socially isolated older adults may prove to be a promising solution.
“AI presents exciting opportunities to give companion robots greater skills to build social connection,” said Elizabeth Broadbent, Ph.D., professor of Psychological Medicine at Waipapa Taumata Rau, University of Auckland. “But we need to be careful to build in rules to ensure they are moral and trustworthy.”
Social robots like the ElliQ have had thousands of interactions with human users, nearly half related to simple companionship, including company over a cup of tea or coffee. A growing body of research on companion robots suggests they can reduce stress and loneliness and can help older people remain healthy and active in their homes.
Newer robots embedded with advanced AI programs may foster stronger social connections with humans than earlier generations of robots. Generative AI like ChatGPT, which is based on large language models, allows robots to engage in more spontaneous conversations, and even mimic the voices of old friends and loved ones who have passed away.
Doctors are mostly on board, too, the authors point out. A Sermo survey of 307 care providers across Europe and the United States showed that 69% of physicians agreed that social robots could provide companionship, relieve isolation, and potentially improve patients’ mental health. Seventy percent of doctors also felt insurance companies should cover the cost of companion robots if they prove to be an effective friendship supplement. How to measure a robot’s impact, though, remains tricky.
This lack of measurability highlights the need to develop patient-rated outcome measures, such as the one being developed by the authors. The “Companion Robot Impact Scale” (Co-Bot-I-7) aims to establish the impact on physical health and loneliness, and is showing that companion machines might already be proving effective.
Early results from Broadbent’s lab, for example, find that amiable androids help reduce stress and even promote skin healing after a minor wound.
“With the right ethical guidelines,” the authors conclude in their report, “we may be able to build on current work to use robots to create a healthier society.”
In addition to Dr. Doraiswamy and Professor Broadbent, study authors include Mark Billinghurst, Ph.D., and Samantha Boardman, M.D.
Professor Broadbent and Dr. Doraiswamy have served as advisors to Sermo and technology companies. Dr. Doraiswamy, Professor Broadbent, and Dr. Boardman are co-developers of the Co-Bot-I-7 scale.
CITATION: “Enhancing Social Connectedness With Companion Robots Employing AI,” Elizabeth Broadbent, Mark Billinghurst, Samantha G. Boardman, P. Murali Doraiswamy. Science Robotics, July 12, 2023. DOI: 10.1126/scirobotics.adi6347
Use of ChatGPT improves productivity, with particular benefits to those with weaker skills
AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE (AAAS)
The use of ChatGPT – a chatbot that can generate human-like text – raises productivity in professional writing tasks and reduces productivity inequality in those who use it, according to a new study involving over 400 college-educated professionals. Although the findings reveal direct and immediate effects of ChatGPT on worker productivity, study authors Shakked Noy and Whitney Zhang note that longer-term impacts on complex labor market dynamics, which will likely arise as firms and workers adapt to ChatGPT, remain unknown. “Overall, the arrival of ChatGPT ushers in an era of vast uncertainty about the economic and labor market effects of AI technologies,” write the authors. “Our experiment takes the first step toward answering the many questions that have arisen.”

The recent and rapid advancements in generative AI systems, particularly platforms like ChatGPT or DALL-E, are unique compared to most historical automation technologies. In the past, automation has affected more routine tasks consisting of explicit sequences or steps, like manufacturing or bookkeeping tasks. However, generative AI technologies are becoming quite adept at performing more creative and difficult-to-codify tasks like writing or image generation, which have long relied on specialized and educated workers. According to Noy and Zhang, like other forms of automation, a potent writing tool such as ChatGPT can potentially enhance workers’ productivity, offering particular benefits to those with weaker skills. It could also make some kinds of writers obsolete, replacing them entirely.

Here, Noy and Zhang evaluated these outcomes in the context of diverse professional writing tasks. In a pre-registered online experiment, the authors assigned incentivized, occupation-specific writing tasks to 453 college-educated professionals, half of whom were allowed to use ChatGPT. The findings show that 80% of those allowed to use ChatGPT did and that the writers in this group were substantially more productive than the control group. Not only did the time taken to complete tasks decrease by 40%, but the output quality also rose by 18%. What’s more, the authors found that participants with weaker skills benefited the most from the use of ChatGPT, illustrating a reduction in overall inequality among workers.
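The headline numbers boil down to a simple treatment-versus-control comparison. Below is an illustrative Python sketch of that contrast; the simulated completion times are hypothetical placeholders, not the study’s data, and are only meant to show how a roughly 40% reduction in task time would be computed and tested.

```python
import numpy as np
from scipy import stats

# Illustrative sketch only: compare task completion times between a ChatGPT
# group and a control group. The numbers are simulated placeholders, not the
# study's data; the study split ~453 participants between the two arms.

rng = np.random.default_rng(0)
control_minutes = rng.normal(loc=27, scale=8, size=226)   # hypothetical
chatgpt_minutes = rng.normal(loc=17, scale=6, size=227)   # hypothetical

reduction = 1 - chatgpt_minutes.mean() / control_minutes.mean()
t, p = stats.ttest_ind(chatgpt_minutes, control_minutes, equal_var=False)

print(f"time reduction: {reduction:.0%} (Welch t = {t:.1f}, p = {p:.2g})")
```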
AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE (AAAS)
In this special issue of Science, nine pieces – including Perspectives, Policy Forums, and Reviews – highlight recent advancements in artificial intelligence (AI) technologies and how they’re being used to answer novel questions in topics ranging from human health to animal behavior. However, the recent widespread adoption of AI in these areas is not without unique ethical concerns and policy challenges. “By looking to the forefront of how AI is being used in science and society, many grand challenges and benefits appear,” writes Gemma Alderton, deputy editor at Science.
AI-predicted race variables from medical images pose risks and opportunities for studying health disparities, say James Zou and colleagues in a Perspective. Hundreds of AI-assisted medical devices are currently used in diverse medical tasks, such as assessing health risks and diagnosing diseases like cancer. Some studies have shown that AI models can infer race variables – albeit in crude, simplistic categories – directly from medical images like chest x-rays and cardiac ultrasounds, despite no known human-readable race correlates in the images. “Although race variables are not a generally meaningful category in medicine, the ability of AI to predict race variables from medical images could be useful for monitoring health care disparity and ensuring that algorithms work well across diverse populations,” Zou et al. write. In a second Perspective, Matthew DeCamp and Charlotta Lindvall highlight how examination of bias in AI and healthcare has tended toward removing bias from datasets, analyses, or in AI development teams. However, DeCamp and Lindvall argue that it will also require reducing biases in how clinicians and patients use AI-based algorithms, which could be more challenging than reducing biases in the algorithms themselves.
AI technologies also show great promise in expanding our understanding of animal behaviors. In a third Perspective, Christian Rutz and colleagues review how machine learning (ML) methods are being used to decode animal communication systems. Understanding how animals communicate presents a host of challenges – animals use a wide range of communication adaptations, including visual, acoustic, tactile, chemical, and electrical signals, often in ways beyond humans’ perceptive abilities. Here, Rutz et al. review ways in which increasingly powerful ML tools are being used to reveal previously hidden complexity in animals’ communicative behavior, with insights that could lead to potential benefits for animal welfare and conservation. “…it is essential that future advances are used to benefit the animals being studied,” write Rutz et al.
In a fourth Perspective, Peter Wurman and colleagues highlight how games provide controlled opportunities to isolate and practice many problem-solving skills that are more broadly transferable to real-world applications, which makes them valuable training grounds for intelligent machines. While the recent dominance of AI in classic strategy games has largely been achieved, Wurman and colleagues argue that video games pose new types of challenges for AI to conquer. Making progress in these arenas will represent a substantial step toward much more capable and flexible AI systems that operate in the physical world.
Generative AI – a type of AI technology that can produce a wide variety of content such as images, videos, audio, and text – has rapidly become widely adopted by the general public, scientists, and technologists. However, a growing number of professional artists, writers, and musicians have raised objections to the use of their creations as training data for these systems. In a Policy Forum, Pamela Samuelson highlights this emerging issue and discusses how several copyright lawsuits, now underway in the U.S., could have substantial implications for the future of generative AI systems. If the plaintiffs in these cases prevail, the only material generative AI systems could lawfully be trained on would be public domain works or those under licenses, which would affect everyone who uses the technology, including for scientific research. In a second Policy Forum, Ajay Agrawal and colleagues discuss how task automation via AI innovations could reverse current trends of increasing income inequality. Given the rapid development of AI technologies that enable automation of cognitive and creative endeavors once reserved for humans with specialized education and experience, some economists have raised concerns that AI has the potential to substantially disrupt the labor market and further increase inequality, albeit with little benefit to productivity and standard of living. Here, Agrawal et al. argue that, by considering how tasks can be automated, AI developers could create tools that enhance the overall productivity of workers. What’s more, AI automation could also reduce income inequality by offering innovations that allow lower-wage and less skilled workers to perform at levels that would previously require specialized training.
In one Review, Felix Wong and colleagues discuss how advances in AI are empowering medical and biotechnological research in the fight against infectious disease. According to Wong et al., AI technologies, like ML, have led to rapid advancements in anti-infective drug discovery, our understanding of infection biology, and the development of new diagnostics. Further applications could also improve our ability to forecast and control infectious disease outbreaks and pandemics. A second Review by Bing Huang and colleagues focuses on the crucial role density functional theory – pivotal in chemical and materials science because of its relatively high predictive power – has played in the development of ML-based models used to navigate chemical compound space. Huang et al. argue that continued advancements in this space pave the way toward software control solutions that can routinely handle exotic chemistries and formulations within self-driving laboratories.
Lastly, a series of Vignettes by various authors highlight AI’s applications in advanced medical robots. AI technologies used in these devices, including computer vision, medical imaging analysis, precise manipulation, and ML, could enable autonomous robots to perform diagnostic imaging and assist in complex surgical procedures. Furthermore, AI in wearable rehabilitation devices and advanced prosthetics could enable more personalized patient care and even AI-powered prosthetics that operate seamlessly with the human user.