Friday, June 16, 2023

If art is how we express our humanity, where does AI fit in?


Media Lab researcher Ziv Epstein discusses issues arising from the use of generative AI to make art and other media

Peer-Reviewed Publication

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Generative AI Art

IMAGE: In a new paper, researchers attempt to provide a framework for discussions about generative AI’s impact on creative work and society more broadly. This image was created by paper co-author and artist Memo Akten.

CREDIT: Courtesy of Memo Akten, using custom AI software




The rapid advance of artificial intelligence has generated a lot of buzz, with some predicting it will lead to an idyllic utopia and others warning it will bring the end of humanity. But speculation about where AI technology is going, while important, can also drown out important conversations about how we should be handling the AI technologies available today.

One such technology is generative AI, which can create content including text, images, audio, and video. Popular generative AIs like the chatbot ChatGPT generate conversational text based on training data taken from the internet.

Today a group of 14 researchers from a number of organizations including MIT published a commentary article in Science that helps set the stage for discussions about generative AI’s immediate impact on creative work and society more broadly. The paper’s MIT-affiliated co-authors include Media Lab postdoctoral researcher Ziv Epstein SM ’19, PhD ’23; recent graduate Matt Groh SM ’19, PhD ’23; MIT PhD candidate Rob Mahari ’17; and Media Lab research assistant Hope Schroeder.

MIT News spoke with Epstein, the lead author of the paper.

Q: Why did you write this paper?

A: Generative AI tools are doing things that even a few years ago we never thought would be possible. This raises a lot of fundamental questions about the creative process and the human’s role in creative production. Are we going to get automated out of jobs? How are we going to preserve the human aspect of creativity with all of these new technologies?

The complexity of black-box AI systems can make it hard for researchers and the broader public to understand what’s happening under the hood, and what the impacts of these tools on society will be. Many discussions about AI anthropomorphize the technology, implicitly suggesting these systems exhibit human-like intent, agency, or self-awareness. Even the term “artificial intelligence” reinforces these beliefs: ChatGPT uses first-person pronouns, and we say AIs “hallucinate.” These agentic roles we give AIs can undermine credit to the creators whose labor underlies a system’s outputs, and can deflect responsibility from the developers and decision makers when the systems cause harm.

We’re trying to build coalitions across academia and beyond to help think about the interdisciplinary connections and research areas necessary to grapple with the immediate dangers to humans coming from the deployment of these tools, such as disinformation, job displacement, and changes to legal structures and culture.

Q: What do you see as the gaps in research around generative AI and art today?

A: The way we talk about AI is broken in many ways. We need to understand how perceptions of the generative process affect attitudes toward outputs and authors, and also design the interfaces and systems in a way that is really transparent about the generative process and avoids some of these misleading interpretations. How do we talk about AI, and how do these narratives cut along lines of power? As we outline in the article, there are these themes around AI’s impact that are important to consider: aesthetics and culture; legal aspects of ownership and credit; labor; and impacts on the media ecosystem. For each of those we highlight the big open questions.

With aesthetics and culture, we’re considering how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was “the end of art.” But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement. We’re saying generative AI is a medium with its own affordances. The nature of art will evolve with that. How will artists and creators express their intent and style through this new medium?

Issues around ownership and credit are tricky because we need copyright law that benefits creators, users, and society at large. Today’s copyright laws might not adequately apportion rights to artists when these systems are training on their styles. When it comes to training data, what does it mean to copy? That’s a legal question, but also a technical question. We’re trying to understand if these systems are copying, and when.

For labor economics and creative work, the idea is these generative AI systems can accelerate the creative process in many ways, but they can also remove the ideation process that starts with a blank slate. Sometimes, there’s actually good that comes from starting with a blank page. We don’t know how it’s going to influence creativity, and we need a better understanding of how AI will affect the different stages of the creative process. We need to think carefully about how we use these tools to complement people’s work instead of replacing it.

In terms of generative AI’s effect on the media ecosystem, with the ability to produce synthetic media at scale, the risk of AI-generated misinformation must be considered. We need to safeguard the media ecosystem against the possibility of massive fraud on one hand, and people losing trust in real media on the other.

Q: How do you hope this paper is received — and by whom?

A: The conversation about AI has been very fragmented and frustrating. Because the technologies are moving so fast, it’s been hard to think deeply about these ideas. To ensure the beneficial use of these technologies, we need to build shared language and start to understand where to focus our attention. We’re hoping this paper can be a step in that direction. We’re trying to start a conversation that can help us build a roadmap toward understanding this fast-moving situation.

Artists are often at the vanguard of new technologies. They’re playing with the technology long before there are commercial applications. They’re exploring how it works, and they’re wrestling with the ethics of it. AI art has been around for over a decade, and for just as long artists have been grappling with the questions we now face as a society. I think it is critical to uplift the voices of the artists and other creative laborers whose jobs will be impacted by these tools. Art is how we express our humanity. It’s a core human, emotional part of life. In that way we believe it’s at the center of broader questions about AI’s impact on society, and hopefully this paper can ground that discussion.

###

Written by Zach Winn, MIT News Office

Illusions are in the eye, not the mind


Peer-Reviewed Publication

UNIVERSITY OF EXETER

Simultaneous contrast

IMAGE: The bar in the middle of this figure is all one grey level, but it appears lighter on the left and darker on the right due to the gradient in the background. This is called simultaneous contrast, where dark surrounds make targets appear lighter, and vice versa.

CREDIT: Jolyon Troscianko




Numerous visual illusions are caused by limits in the way our eyes and visual neurones work – rather than more complex psychological processes, new research shows.

Researchers examined illusions in which an object’s surroundings affect the way we see its colour or pattern.

Scientists and philosophers have long debated whether these illusions are caused by neural processing in the eye and low-level visual centres in the brain, or involve higher-level mental processes such as context and prior knowledge.

In the new study Dr Jolyon Troscianko, from the University of Exeter, co-developed a model that suggests simple limits to neural responses – not deeper psychological processes – explain these illusions.

“Our eyes send messages to the brain by making neurones fire faster or slower,” said Dr Troscianko, from the Centre for Ecology and Conservation on Exeter’s Penryn Campus in Cornwall.  

“However, there’s a limit to how quickly they can fire, and previous research hasn’t considered how the limit might affect the ways we see colour.”

The model combines this “limited bandwidth” with information on how humans perceive patterns at different scales, together with an assumption that our vision performs best when we are looking at natural scenes.

The model was developed by researchers from the Universities of Exeter and Sussex to predict how animals see colour, but it was also found to correctly predict many visual illusions seen by humans.

“This throws into the air a lot of long-held assumptions about how visual illusions work,” Dr Troscianko said.

He said the findings also shed light on the popularity of high dynamic range televisions.

“Modern high dynamic range televisions create bright white regions that are over 10,000 times brighter than their darkest black, approaching the contrast levels of natural scenes,” Dr Troscianko added.

“How our eyes and brains can handle this contrast is a puzzle because tests show that the highest contrast we humans can see at a single spatial scale is around 200:1.

“Even more confusingly, the neurones connecting our eyes to our brains can only handle contrasts of about 10:1.

“Our model shows how neurones with such limited contrast bandwidth can combine their signals to allow us to see these enormous contrasts, but the information is ‘compressed’ – resulting in visual illusions.

“The model shows how our neurones are precisely evolved to use every bit of capacity.

"For example, some neurones are sensitive to very tiny differences in grey levels at medium-sized scales, but are easily overwhelmed by high contrasts.

“Meanwhile, neurones coding for contrasts at larger or smaller scales are much less sensitive, but can work over a much wider range of contrasts, giving deep black-and-white differences.

“Ultimately this shows how a system with a severely limited neural bandwidth and sensitivity can perceive contrasts larger than 10,000:1.”
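The mechanism described in the quote can be made concrete with a toy sketch. The code below is an illustration of the general idea only, not the published model: the function names, spatial scales, and clipping limit are invented for this example. It splits a 1-D log-luminance profile into bandpass channels at several scales, clips each channel to a small contrast range, and re-sums them. Two physically identical grey bars then reconstruct to different values depending on their surround, showing how limited per-channel bandwidth “compresses” the signal and produces illusion-like shifts.

```python
# Illustrative sketch only -- NOT the published model.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def band_decompose(signal, sigmas=(1, 2, 4, 8, 16)):
    """Split a 1-D log-luminance profile into bandpass channels
    (successive differences of Gaussians) plus a low-pass residual."""
    bands, current = [], signal.astype(float)
    for s in sigmas:
        blurred = gaussian_filter1d(current, s)
        bands.append(current - blurred)   # detail captured at this scale
        current = blurred
    return bands, current                 # residual = coarsest structure

def limited_bandwidth_recon(signal, max_band_contrast=0.2):
    """Re-sum the bands after clipping each one to +/- max_band_contrast
    (log10 units), mimicking channels that can only signal a small
    contrast range on their own."""
    bands, residual = band_decompose(signal)
    clipped = [np.clip(b, -max_band_contrast, max_band_contrast) for b in bands]
    return sum(clipped) + residual

# Two physically identical grey bars (log-luminance 0), one on a dark
# surround and one on a light surround.
x = np.zeros(400)
x[:200] = -1.0           # dark surround (left half)
x[200:] = +1.0           # light surround (right half)
x[90:110] = 0.0          # grey bar on the dark side
x[290:310] = 0.0         # identical grey bar on the light side

recon = limited_bandwidth_recon(x)
print("bar on dark surround :", round(recon[90:110].mean(), 3))
print("bar on light surround:", round(recon[290:310].mean(), 3))
# The two identical bars now reconstruct to different values: the
# compression introduced by the per-channel limit shifts their apparent
# grey level with the surround -- an illusion-like effect.
```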

The paper, published in the journal PLOS Computational Biology, is entitled: “A model of colour appearance based on efficient coding of natural images.”

The two grey bars in the middle of this figure are the same grey, but the one on the left (surrounded by more black bars) appears darker. This is the opposite of the simultaneous contrast example above, because darker surrounds now make the target look darker.

Both cubes have what appear to be yellow and blue tiles on their top surfaces. However, the ones that look yellow on the left are in fact a grey colour that is identical to the blue tiles on the right. Our model can help explain how objects appear to be the same colour even when the light changes, and why in illusions such grey looks colourful.

CREDIT: Jolyon Troscianko

Cracks in the glass ceiling?


1960s landmark anti-discrimination laws intended to increase wages for women may have been more effective than previously thought

Reports and Proceedings

UNIVERSITY OF CALIFORNIA - LOS ANGELES

Two pieces of landmark federal legislation intended to lessen the gender pay gap have long been deemed by many scholars to have been ineffectual. Despite the passage of the 1963 Equal Pay Act, signed into law 60 years ago on June 10, and Title VII of the 1964 Civil Rights Act — mandating equal pay for equal work and prohibiting employment discrimination against women — median salaries were thought to have maintained a persistent gender pay gap through the 1980s.

But now, economic researchers, in a new working paper published by the National Bureau of Economic Research, bring positive news, according to Martha Bailey, UCLA professor of economics and the paper’s lead author. An examination of more granular data shows that the gender pay gap decreased between the 1960s and 1980s in certain states and certain jobs.

Between 1964 and 1968, wages for women increased by an average of 11%, with larger gains in lower-wage jobs. In addition, the U.S. Department of Labor’s enforcement and awarding of back pay motivated some employers to do away with wage discrimination in contracts and negotiations.

“There’s been a longstanding concern that these laws did not change practice on the ground,” Bailey said. “But one thing we’ve learned from this research is that legislation, mixed with strong enforcement, did a lot to narrow pay gaps.”

Bailey says that one of the difficulties in evaluating the effectiveness of the laws has been that federal legislation applies across the entire U.S. at one time. But given that economies, labor markets and policies change in every state, many different factors contribute to maintaining persistent gaps. Research was needed to discover where gains occurred and where they did not.

Enforcement of Title VII of the Civil Rights Act came in 1971, with the landmark Supreme Court case Phillips v. Martin Marietta Corp., which held that employers could not refuse to hire women with school-aged children while hiring men with school-aged children. The following year, the Equal Employment Opportunity Act of 1972 gave the Equal Employment Opportunity Commission the authority to pursue independent lawsuits and expanded Title VII’s coverage. With strengthened enforcement, firms changed hiring practices in a number of fields, including newspapers, which incidentally had continued to print sex-segregated help-wanted ads through the 1970s.

The research found that wage gains for women were in jobs where wages were below the 50th percentile. Some of the most significant increases occurred in jobs such as department store clerk, food service worker and janitor.

Bailey says that in many of these lower paying jobs, it may have been easier to determine when a man and woman were doing precisely the same job and therefore judge the equality of work. This would have made enforcement easier, too, perhaps compelling employers to comply with the new laws.

“Overall, the research tells us that legislation coupled with enforcement was important for narrowing pay gaps,” said Bailey. “However, it also tells a cautionary tale — that something that narrows pay gaps in the short term can increase pay gaps across jobs in the long run if firms respond by hiring or promoting fewer women.”

Fungal infections an unintended consequence of advanced immunotherapy, research shows


The Desai Lab shows how clinical use of some monoclonal antibodies may cause life-threatening systemic fungal infections


Peer-Reviewed Publication

HACKENSACK MERIDIAN HEALTH

IMAGE: Dr. Jigarkumar Desai, of the Hackensack Meridian CDI

CREDIT: Hackensack Meridian Health



Major fungal infections have become more common across the globe, and one unexpected aspect of this rise is life-threatening infections arising as a complication of certain immunotherapies and small-molecule kinase inhibitors.

 

In a new publication, a scientist at the Hackensack Meridian Center for Discovery and Innovation (CDI) has identified the specific mechanistic cause of one such phenomenon, a finding that will likely save lives in the future.

 

The paper, “C5a-licensed phagocytes drive sterilizing immunity during systemic fungal infection,” appeared in the journal Cell on May 22.

 

“Our findings will assist clinicians in their understanding of how these life-threatening infections are emerging,” said Jigar Desai, Ph.D., assistant member of the CDI, assistant professor of medical sciences at the Hackensack Meridian School of Medicine, and first author of the paper. “These findings may help doctors and scientists alike better understand how some of these cases arise - and how to avoid them.”

 

The team of scientists established that the C5a protein, the penultimate effector constituent of the complement pathway, is key to the body’s innate ability to fight systemic fungal infections. The team also found that an enhanced complement pathway signature acts as a predictive biomarker for systemic candidiasis. Using animal models, patient data, and sera, the team showed how C5a and its downstream effects are crucial for the body’s immune cells, specifically neutrophils and macrophages, to clear the fungus Candida albicans when it has overtaken the body’s natural defenses.

 

Desai and the team - which includes colleagues from the National Institutes of Health, Duke University, and Mount Sinai, among others - showed this in stages, both in animal models and in patient serum, by isolating the roles that C5 plays.

 

In addition to uncovering an induced complement signature as a potential biomarker for systemic candidiasis, this work will be highly impactful in the clinical setting, where complement C5-targeted therapeutics, such as the anti-C5 monoclonal antibodies eculizumab and ravulizumab (as well as the C5a receptor inhibitor avacopan), are the treatment of choice. In these settings, the findings from this work emphasize the importance of vigilant surveillance for opportunistic fungal infections, where early diagnosis can improve patient outcomes.

 

"Our findings establish a new paradigm in immunobiology, demonstrating for the first time the direct critical role of cell-intrinsic complement generation for effective host defense against Candida,” write the authors. "The multifaceted translation of our work shows promise for the development of individualized risk stratification and prognostication strategies in patients at-risk for invasive fungal disease."

 

Desai, a fungal expert who joined the CDI last year, has had other recent publications. 

In two papers in 2022, he and colleagues showed that a particular genetic deficiency may leave certain people vulnerable to the onslaught of a certain plant pathogenic fungus, and also explored how systemic candidiasis may actually carry enhanced mortality following the use of broad-spectrum antibiotics. Those papers were published in The Journal of Clinical Investigation and Cell Host and Microbe.

 

“Jigar Desai is uncovering novel insights into life-threatening fungal diseases,” said David Perlin, Ph.D., chief scientific officer and executive vice president of the CDI, who is also an expert in fungal infections. “As we know, this is an emerging health problem, and it’s key to have his work drive our understanding forward.”

 

New biotech venture PHIOGEN, a spinoff of BCM’s TAILOR Labs, to tackle the global threat of antimicrobial resistance


Business Announcement

BAYLOR COLLEGE OF MEDICINE

The new biotech venture PHIOGEN is a spin-off company from Baylor College of Medicine’s TAILOR Labs, one of the United States’ only academic phage therapy cores, with a decade’s worth of revolutionary research related to bacteriophages, viruses that infect and destroy bacteria.

The company made its debut at the 6th World Conference on Targeting Phage Therapy in Paris, June 1-2, 2023.

PHIOGEN’s R&D efforts are led by phage researcher Dr. Anthony Maresso, founder of TAILOR Labs and associate professor of molecular virology and microbiology at Baylor, whose phage therapy work has attracted funding of more than $5 million to date.

The globally renowned research team behind PHIOGEN is housed in the Innovation Hub of the prestigious Texas Medical Center, the world’s largest medical complex.

PHIOGEN has developed a world-first technology platform that mobilizes the natural power of bacteriophages to tackle critical and life-threatening infections. This marks a significant medical breakthrough for countering the global threat of antimicrobial resistance.

The World Health Organization deems drug-resistant infections one of the top 10 global public health threats facing humanity, with estimates of over 5 million deaths worldwide attributed to antibiotic-resistant infections.

PHIOGEN’s proprietary, first-of-its-kind technology platform discovers and screens naturally occurring bacteriophages at scale, singles out those with elite bacteria-fighting abilities, and directs biological changes that evolve the phages into antimicrobials that overcome resistance.

This creates a new business model for phage therapy, as the group can create products that treat populations of people rather than working on a per-patient basis. By optimizing nature’s defenders, the team has produced unprecedented phage treatments which have already successfully saved the lives of several patients in FDA-approved compassionate use cases.

“We receive high-performing phage fighters that are trained and ready to deliver safe and effective treatments for clinical applications,” said Amanda Burkardt, CEO at PHIOGEN.

###

About PHIOGEN                                                                                                         

PHIOGEN™ is a trademark of PHIOGEN INC. PHIOGEN is an innovative biotech company housed in the Texas Medical Center Innovation Hub. It is committed to using proven technology to deliver patient-ready bacteriophage products to tackle the most deadly and serious bacterial infections. PHIOGEN’s world-class patented process has received early proof of concept validation through several in vivo studies as well as for patients in FDA-approved compassionate use cases.

Learn more about the clinical successes of phage therapy in this video interview with Dr. Maresso.

 

SEE  

https://plawiuk.blogspot.com/search?q=PHAGES

How microgrids can help communities adapt to wildfires

For some of the most vulnerable communities, clean energy microgrids can be both more effective and cheaper than conventional technologies


Peer-Reviewed Publication

DOE/LAWRENCE BERKELEY NATIONAL LABORATORY

IMAGE: Griffith Park brush wildfire and the Woolsey Fire in California.

CREDIT: Jorge Villalba/iStock



– By Dan Mullen

Wildfires have become increasingly frequent due to climate change, with record occurrences in areas not historically prone to them. In California, wildfires and regional power shutoffs have cost billions and taken lives. For some 46 million Americans living next to forests – at what scientists call the “wildland-urban interface” (WUI) – the risks of wildfire can be especially acute.

Microgrids can build resilience in vulnerable communities. By using small-scale, local energy sources and disconnecting from regional grids during emergencies, they can deliver essential services to keep homes and communities safe. But conventional microgrids with diesel generators are both too expensive and too polluting to be viable at scale.

An international team led by research scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has found that clean energy microgrids offer a better and cheaper solution for protecting California communities from wildfire-related outages, compared to conventional microgrids. These systems, which primarily rely on solar and batteries, can be built at a cost well below what households typically pay for electricity and can reduce the impact of power outages (by minimizing public safety power shutoff time) by a factor of up to 30.

“This is the first detailed, state-level study that’s looked at how clean energy microgrids can minimize outage impacts on vulnerable communities, and how much it would cost,” said Tianzhen Hong, a co-author and senior scientist in the Building Technology & Urban Systems Division at Berkeley Lab. The models developed for the Applied Energy study, Hong added, can help interested parties understand where these communities are located; how clean energy microgrids could be designed; and how much it would cost to reduce outages below a desired threshold. 

“In certain areas of California, ‘Public Safety Power Shutoffs’ could remove access to electricity for up to 7% of the year,” said Dasun Perera, referring to the regional outages employed to lessen wildfire risk. A former Berkeley Lab postdoc who is now a researcher at the Andlinger Center for Energy and Environment at Princeton University, Perera said clean energy microgrids can cut these impacts by more than half, at a cost ranging from 15 to 30 cents per kilowatt-hour (kWh). Californians pay about 25 cents per kWh on average for residential electricity.
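A quick back-of-the-envelope calculation puts those quantities side by side. The snippet below uses only the figures quoted in this article; it is illustrative arithmetic, not output from the study’s models.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the article.
HOURS_PER_YEAR = 8760

shutoff_fraction = 0.07                    # up to 7% of the year without power
shutoff_hours = HOURS_PER_YEAR * shutoff_fraction
print(f"Potential shutoff exposure: ~{shutoff_hours:.0f} hours/year")   # ~613

# Outage exposure if a clean energy microgrid cuts impacts by a factor of 30,
# the upper bound cited earlier in the article.
print(f"With a 30x reduction: ~{shutoff_hours / 30:.0f} hours/year")    # ~20

# Cost comparison against the average California residential rate.
microgrid_cost = (0.15, 0.30)              # $/kWh range from the study
grid_rate = 0.25                           # $/kWh average CA residential rate
print(f"Microgrid energy: ${microgrid_cost[0]:.2f}-${microgrid_cost[1]:.2f}/kWh "
      f"vs. ~${grid_rate:.2f}/kWh from the grid")
```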

The study evaluated clean energy microgrids in seven locations with different climatic conditions across California’s vast WUI, which spans the state. Novel modeling tools developed for the study helped researchers select communities based on wildfire risk and renewable energy potential, and then design microgrids to meet the specific needs of households in wildfire-prone areas, where heating and cooling account for most energy use. The microgrids – which included conventional generators, but rarely used them – enabled communities to run on at least 60% renewable energy year-round, while significantly reducing heating and cooling emissions and minimizing the burden renewables can impose on regional grids.

Importantly for Hong, clean energy microgrids can promote energy equity: equal access to clean technologies and their associated benefits. Most WUI communities are not wealthy, and they can be disadvantaged by issues relating to access, mobility, and public health.

“We’re really talking about equity here,” Hong said. “The technology can be really good, but at the end of the day, if people can’t afford it then nothing happens.” Under the Bipartisan Infrastructure Law, extensive federal support has been directed towards the type of community energy installations the study describes.

As a next step, Hong and Perera hope to work with stakeholders – city governments, utilities, and others – to help design actual microgrids that deliver real-world benefits while furthering their research capabilities. All data from the study is freely available to the public, and with additional development its models could support public planning efforts.

More broadly, Hong and Perera continue to investigate how a warming climate might affect future wildfires and other extreme events, and what types of clean energy infrastructure can bolster community resilience cost-effectively and at scale. Another study, also published in  Applied Energy, found that a unique approach to harnessing geothermal energy – namely, using underground aquifers as a type of thermal energy storage – can help communities increase their use of renewables while enabling the broader energy system to withstand climate variations far into the future.

Researchers from TU Wien; the HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute; the Hong Kong University of Science and Technology; and the University of California, Berkeley participated in the microgrid study, which was partially funded by the Project of Hetao Shenzhen – Hong Kong Science and Technology Innovation Cooperation Zone.

California map illustrating a selection of communities suitable for microgrid implementation.

CREDIT: Bingyu Zhao/Vienna University of Technology

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.