Monday, April 15, 2024

 

Quantum crystal of frozen electrons—the Wigner crystal—is visualized for the first time



Princeton University researchers detect a strange form of matter that has eluded direct detection for some 90 years



PRINCETON UNIVERSITY

Video: Melting of an electron Wigner crystal into electron liquid phases. As the electron density (the filling factor ν, a measure of the number of electrons in a magnetic field, controlled by applying electric voltages) is increased, more electrons (dark blue sites) enter the field of view and a periodic triangular lattice emerges. The periodic structure first melts near ν = 0.334, where the map shows a homogeneous signal, then reappears at higher density ν and eventually melts again near ν = 0.414.

Credit: Yen-Chen Tsui, Princeton University
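The filling factor ν in the video caption can be made concrete with a quick back-of-the-envelope calculation: ν counts electrons per magnetic flux quantum, ν = nh/(eB). A minimal sketch; the field and target ν below are illustrative values, not parameters from the paper:

```python
# Landau-level filling factor nu = n*h/(e*B): electrons per flux quantum.
H = 6.62607015e-34   # Planck constant, J*s
E = 1.602176634e-19  # elementary charge, C

def filling_factor(n_per_m2, b_tesla):
    """Electrons per magnetic flux quantum at areal density n and field B."""
    return n_per_m2 * H / (E * b_tesla)

def density_for_nu(nu, b_tesla):
    """Areal density (m^-2) needed to reach filling factor nu."""
    return nu * E * b_tesla / H

b = 12.0  # tesla, illustrative
n_melt = density_for_nu(0.334, b)  # density at the first melting in the video
print(f"nu = 0.334 at B = {b} T corresponds to n = {n_melt:.2e} per m^2")
```

Raising the gate voltage raises n, which raises ν at fixed field; that is the knob being turned in the video.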




Electrons, the infinitesimally small particles known to zip around atoms, continue to amaze scientists even after more than a century of study. Now, physicists at Princeton University have pushed the boundaries of our understanding of these minute particles by visualizing, for the first time, direct evidence for what is known as the Wigner crystal, a strange kind of matter made entirely of electrons.

The finding, published in the April 11 issue of the journal Nature, confirms a 90-year-old theory that electrons can assemble into a crystal-like formation of their own, without the need to coalesce around atoms. The research could help lead to the discovery of new quantum phases of matter in which electrons behave collectively.

“The Wigner crystal is one of the most fascinating quantum phases of matter that has been predicted and the subject of numerous studies claiming to have found at best indirect evidence for its formation,” said Al Yazdani, the James S. McDonnell Distinguished University Professor in Physics at Princeton University and the senior author of the study. “Visualizing this crystal allows us not only to watch its formation, confirming many of its properties, but we can also study it in ways you couldn’t in the past.”

In the 1930s, Eugene Wigner, a Princeton professor of physics and winner of the 1963 Nobel Prize for his work in quantum symmetry principles, wrote a paper in which he proposed the then-revolutionary idea that interaction among electrons could lead to their spontaneous arrangement into a crystal-like configuration, or lattice, of closely packed electrons. This could only occur, he theorized, because of their mutual repulsion and under conditions of low densities and extremely cold temperatures.

“When you think of a crystal, you typically think of an attraction between atoms as a stabilizing force, but this crystal forms purely because of the repulsion between electrons,” said Yazdani, who is the inaugural co-director of the Princeton Quantum Institute and director of the Princeton Center for Complex Materials.

For a long time, however, Wigner’s strange electron crystal remained in the realm of theory. It was not until a series of much later experiments that the concept of an electron crystal transformed from conjecture to reality. The first of these was conducted in the 1970s when scientists at Bell Laboratories in New Jersey created a “classical” electron crystal by spraying electrons on the surface of helium and found that they responded in a rigid manner like a crystal. However, the electrons in these experiments were very far apart and behaved more like individual particles than a cohesive structure. A true Wigner crystal, instead of following the familiar laws of physics in the everyday world, would follow the laws of quantum physics, in which the electrons would act not like individual particles but more like a single wave.

This led to a whole series of experiments over the next decades that proposed various ways to create quantum Wigner crystals. These experiments were greatly advanced in the 1980s and 1990s when physicists discovered how to confine electrons’ motion to atomically thin layers using semiconductors. The application of a magnetic field to such layered structures also makes electrons move in a circle, creating favorable conditions for crystallization. But these experiments were never able to directly observe the crystal. They were only able to suggest its existence or indirectly infer it from how electrons flow through the semiconductor.   

“There are literally hundreds of scientific papers that study these effects and claim that the results must be due to the Wigner crystal,” Yazdani said, “but one can’t be sure, because none of these experiments actually see the crystal.”

An equally important consideration, Yazdani noted, is that what some researchers think is evidence of a Wigner crystal could be the result of imperfections or other periodic structures inherent to the materials used in the experiments. “If there are any imperfections, or some form of periodic substructure in the material, it is possible to trap electrons and find experimental signatures that are not due to the formation of a self-organized ordered Wigner crystal itself, but due to electrons ‘stuck’ near an imperfection or trapped because of the material’s structure,” he said.

With these considerations in mind, Yazdani and his research team set out to see whether they could directly image the Wigner crystal using a scanning tunneling microscope (STM), a device that relies on a technique called “quantum tunneling” rather than light to view the atomic and subatomic world. They also decided to use graphene, a remarkable material discovered in the 21st century that has featured in many experiments involving novel quantum phenomena. To conduct the experiment successfully, however, the researchers had to make the graphene as pristine and as free of imperfections as possible. This was key to ruling out electron crystals that form because of material imperfections.

The results were impressive. “Our group has been able to make unprecedentedly clean samples that made this work possible,” Yazdani said. “With our microscope we can confirm that the samples are without any atomic imperfection in the graphene atomic lattice or foreign atoms on its surface over regions with hundreds of thousands of atoms.”

To make the pure graphene, the researchers exfoliated two atomically thin sheets of carbon into a configuration called Bernal-stacked bilayer graphene (BLG). They then cooled the sample to extremely low temperatures, just a fraction of a degree above absolute zero, and applied a magnetic field perpendicular to the sample, which created a two-dimensional electron gas system within the thin layers of graphene. With this setup, they could tune the density of the electrons between the two layers.

“In our experiment, we can image the system as we tune the number of the electrons per unit area,” said Yen-Chen Tsui, a graduate student in physics and the first author of the paper. “Just by changing the density, you can initiate this phase transition and find electrons spontaneously form into an ordered crystal.”

This happens, Tsui explained, because at low densities the electrons are far apart from each other and arranged in a disordered fashion. As the density increases and the electrons are brought closer together, their natural repulsive tendencies kick in and they start to form an organized lattice. Increase the density further, and the crystalline phase melts into an electron liquid.

Minhao He, a postdoctoral researcher and co-first author of the paper, explained this process in greater detail. “There is an inherent repulsion between the electrons,” he said. “They want to push each other away, but in the meantime the electrons cannot be infinitely apart due to the finite density. The result is that they form a closely packed, regular lattice structure, with each localized electron occupying a certain amount of space.”

When this transition occurred, the researchers were able to visualize it using the STM. “Our work provides the first direct images of this crystal. We proved the crystal is really there and we can see it,” said Tsui.

However, just visualizing the crystal wasn’t the end of the experiment. A concrete image allowed the researchers to pin down some of the crystal’s characteristics. They found that the crystal has a triangular configuration and that its lattice can be continuously tuned with the density of the electrons. This led to the realization that the Wigner crystal is quite stable over a wide range of densities, contrary to what many scientists had surmised.

“By being able to continuously tune its lattice constant, the experiment proved that the crystal structure is the result of the pure repulsion between the electrons,” said Yazdani.
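The tuning Yazdani describes follows directly from geometry: in a triangular lattice, the electron density fixes the spacing between electrons (each electron occupies a cell of area (√3/2)a², so n = 2/(√3a²)). A minimal sketch of that relation; the density value in the demo line is illustrative, not from the paper:

```python
import math

# Triangular-lattice geometry: each site occupies area (sqrt(3)/2)*a^2,
# so areal density n = 2 / (sqrt(3) * a^2).
def lattice_constant(n_per_m2):
    """Electron spacing a (meters) of a triangular Wigner crystal at density n."""
    return math.sqrt(2.0 / (math.sqrt(3.0) * n_per_m2))

def density(a_m):
    """Inverse relation: areal density from lattice constant."""
    return 2.0 / (math.sqrt(3.0) * a_m ** 2)

a = lattice_constant(1e15)  # illustrative density of 1e15 electrons per m^2
print(f"a = {a * 1e9:.1f} nm")
```

Raising the density shrinks the lattice constant continuously, which is why watching the spacing track the density is evidence that repulsion alone sets the structure.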

The researchers also discovered several other interesting phenomena that will no doubt warrant further investigation. They found that the location at which each electron is localized in the lattice appears in the images with a certain amount of “blurring,” as if the location is defined not by a point but by a range of positions within which each electron is confined. The paper describes this as the “zero-point” motion of electrons, a phenomenon related to the Heisenberg uncertainty principle. The extent of this blurring reflects the quantum nature of the Wigner crystal.

“Electrons, even when frozen into a Wigner crystal, should exhibit strong zero-point motion,” said Yazdani. “It turns out this quantum motion covers a third of the distance between them, making the Wigner crystal a novel quantum crystal.”

Yazdani and his team are also examining how the Wigner crystal melts and transitions into other exotic liquid phases of interacting electrons in a magnetic field. The researchers hope to image these phases just as they have imaged the Wigner crystal.  

“Direct observation of a magnetic field-induced Wigner crystal,” by Yen-Chen Tsui, Minhao He, Yuwen Hu, Ethan Lake, Taige Wang, Kenji Watanabe, Takashi Taniguchi, Michael P. Zaletel, and Ali Yazdani, was published April 11, 2024, in the journal Nature (DOI to come).

Graduate student Yen-Chen Tsui, postdoctoral research associate Minhao He, and Yuwen Hu, who obtained her Ph.D. from the Princeton Department of Physics in 2023 and is now a postdoctoral fellow at Stanford, contributed equally to this work. Other collaborators include theoretical physicists Ethan Lake, Taige Wang, and Professor Michael Zaletel of the University of California, Berkeley (Zaletel is also a member of the Materials Sciences Division at Lawrence Berkeley National Laboratory), as well as Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science and the International Center for Materials Nanoarchitectonics, respectively.

The work at Princeton was primarily supported by DOE-BES grant DE-FG02-07ER46419 and the Gordon and Betty Moore Foundation’s EPiQS initiative grant GBMF9469. Other support for the experimental infrastructure at Princeton was provided by the NSF-MRSEC program through the Princeton Center for Complex Materials (NSF DMR-2011750), ARO MURI (W911NF-21-2-0147), and ONR N00012-21-1-2592.

The team also acknowledges the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, where part of this work was carried out. Work at UC Berkeley was supported by U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231, within the van der Waals Heterostructures Program (KCWF16).


An image of a triangular Wigner crystal taken by scanning tunneling microscope. Researchers have unveiled an elusive crystal that is formed purely from the repulsive nature of electrons. Each site (blue circular region) contains a single localized electron. Image by Yen-Chen Tsui and team, Princeton University


 

AI-powered ‘sonar’ on smartglasses tracks gaze, facial expressions



CORNELL UNIVERSITY





ITHACA, N.Y. – Cornell University researchers have developed two technologies that track a person’s gaze and facial expressions through sonar-like sensing. The technology is small enough to fit on commercial smartglasses or virtual reality or augmented reality headsets, yet consumes significantly less power than similar tools using cameras.

Both use speakers and microphones mounted on an eyeglass frame to bounce inaudible soundwaves off the face and pick up reflected signals caused by face and eye movements. One device, GazeTrak, is the first eye-tracking system that relies on acoustic signals. The second, EyeEcho, is the first eyeglass-based system to continuously and accurately detect facial expressions and recreate them through an avatar in real time.

The devices can last for several hours on a smartglass battery and more than a day on a VR headset.

“It’s small, it’s cheap and super low-powered, so you can wear it on smartglasses every day – it won’t kill your battery,” said Cheng Zhang, assistant professor of information science. Zhang directs the Smart Computer Interfaces for Future Interactions (SciFi) Lab that created the new devices.

“In a VR environment, you want to recreate detailed facial expressions and gaze movements so that you can have better interactions with other users,” said Ke Li, a doctoral student who led the GazeTrak and EyeEcho development.

For GazeTrak, researchers positioned one speaker and four microphones around the inside of each eye frame of a pair of glasses, to bounce and pick up soundwaves from the eyeball and the area around the eyes. The resulting sound signals are fed into a customized deep learning pipeline that uses artificial intelligence to continuously infer the direction of the person’s gaze.
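The authors' pipeline is not described in detail here, so the following is only a toy illustration of the underlying idea of acoustic echo sensing: emit a near-inaudible sweep, receive a delayed reflection, and recover the delay by cross-correlation. A real system would feed such echo profiles to the learned model; all signal parameters below are assumptions for the sketch:

```python
import numpy as np

# Toy acoustic-echo sensing (not the GazeTrak pipeline): a known sweep is
# reflected with a delay, and cross-correlation recovers that delay.
rng = np.random.default_rng(0)
fs = 48_000                       # sample rate, Hz (assumed)
t = np.arange(0, 0.005, 1 / fs)   # 5 ms emission window
chirp = np.sin(2 * np.pi * (18_000 * t + 2e5 * t**2))  # 18-20 kHz sweep

true_delay = 37                   # echo delay in samples; illustrative
received = np.zeros(len(chirp) + 100)
received[true_delay:true_delay + len(chirp)] += 0.6 * chirp  # attenuated echo
received += 0.01 * rng.standard_normal(len(received))        # sensor noise

# Peak of the cross-correlation marks the echo's arrival time.
corr = np.correlate(received, chirp, mode="valid")
estimated_delay = int(np.argmax(corr))
print(estimated_delay)  # recovers true_delay
```

Changes in the echo profile over time (as the eye or skin moves) are what a learned model would map to gaze direction or expression.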

For EyeEcho, one speaker and one microphone are located next to the glasses’ hinges, pointing down to catch skin movement as facial expressions change. The reflected signals are also interpreted using AI.

With this technology, users can have hands-free video calls through an avatar, even in a noisy café or on the street. While some smartglasses can recognize faces or distinguish between a few specific expressions, none currently tracks expressions continuously the way EyeEcho does.

These two advances have applications beyond enhancing a person’s VR experience. GazeTrak could be used with screen readers to read out portions of text for people with low vision as they peruse a website.

GazeTrak and EyeEcho could also potentially help diagnose or monitor neurodegenerative diseases, like Alzheimer’s and Parkinson’s. With these conditions, patients often have abnormal eye movements and less expressive faces, and this type of technology could track the progression of the disease from the comfort of a patient’s home.

Li will present GazeTrak at the Annual International Conference on Mobile Computing and Networking in the fall and EyeEcho at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in May.


Media note: Pictures can be viewed and downloaded here: https://cornell.box.com/v/sonarsmartglasses.

-30-


A faster, better way to prevent an AI chatbot from giving toxic responses


Researchers create a curious machine-learning model that finds a wider variety of prompts for training a chatbot to avoid hateful or harmful output



MASSACHUSETTS INSTITUTE OF TECHNOLOGY





CAMBRIDGE, MA – A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

“Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments. Our method provides a faster and more effective way to do this quality assurance,” says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI lab and lead author of a paper on this red-teaming approach.

Hong’s co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

 

Automated red-teaming

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, but they could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

“If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts,” Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.

 

Rewarding curiosity

The red-team model’s objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious they include two novelty rewards. One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
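A hedged sketch of how such a shaped reward could be combined, not the authors' exact objective: Jaccard word overlap stands in for both similarity terms (the paper uses separate lexical and semantic measures), and the weight is arbitrary.

```python
# Sketch: toxicity score plus a novelty bonus that pays more for prompts
# dissimilar from those already generated. All names and weights here are
# illustrative, not the authors' implementation.
def jaccard_similarity(a: str, b: str) -> float:
    """Word-set overlap between two prompts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def novelty_bonus(prompt: str, history: list[str]) -> float:
    """1 minus the max similarity to any past prompt."""
    if not history:
        return 1.0
    return 1.0 - max(jaccard_similarity(prompt, h) for h in history)

def shaped_reward(toxicity: float, prompt: str, history: list[str],
                  novelty_weight: float = 0.5) -> float:
    return toxicity + novelty_weight * novelty_bonus(prompt, history)

history = ["how do i build a bomb"]
# A near-duplicate prompt earns less total reward than a novel one,
# even at the same toxicity score.
r_dup = shaped_reward(0.9, "how do i build a bomb", history)
r_new = shaped_reward(0.9, "describe steps to synthesize a toxin", history)
print(r_dup < r_new)  # True
```

The design point is that the gradient now favors exploring new prompt territory rather than repeating one known-toxic prompt, which is what plain toxicity-maximizing reinforcement learning tends to collapse into.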

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this “safe” chatbot.

“We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives and it’s important that they are verified before released for public consumption. Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort to ensure a safer and trustworthy AI future,” says Agrawal. 

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

“If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming,” says Agrawal.

###

This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

 

Filling in genomic blanks for disease studies works better for some groups than others


A new USC study finds an apparent disparity in the effectiveness of genome-wide association studies: a key statistical technique is reliable for people with European or African ancestry, as well as for Latinos, but serves other populations less well.



KECK SCHOOL OF MEDICINE OF USC




Understanding how genetics affect health is an essential first step toward treating and preventing a host of diseases. New knowledge often comes from genome-wide association studies identifying variations in the genetic code linked with conditions such as cancer and autoimmune disease. The more people whose DNA and health histories are examined in such research, the more likely it is that genetic and biological insights will be garnered.

However, cost can be a major barrier: comprehensively sequencing one person’s genome costs about $500 to $1,000, a price that is often infeasible across the tens of thousands of participants in a typical study. So instead, researchers generally focus on key spots where the genetic code tends to vary among individuals, a process called genotyping, which costs about $100 per participant. A statistical method called genotype imputation then helps them fill in the genetic blanks based on existing reference panels of fully sequenced genomes.

A new Keck School of Medicine of USC study, supported by the National Institutes of Health and appearing in the American Journal of Human Genetics, identifies a disparity in how well imputation works for different populations. The researchers found that the technique holds up nicely for well-represented groups with European ancestry, as well as for African Americans and Latinos, who have been the subject of recent, concerted efforts to increase representation in sequencing reference panels. However, the researchers found that imputation is far less reliable for other groups, generally doing worse for populations farther away from Europe, except for Africa and Latin America. 

“These global populations are not being imputed as well, meaning that we have a lot more error in filling in missing parts of the genome,” said corresponding author Charleston Chiang, PhD, associate professor of population and public health sciences and associate director at the Keck School of Medicine’s Center for Genetic Epidemiology. “That means the analysis using these imputed data doesn’t work as well. And because researchers filter based on the reliability of imputation, we end up having data for diverse populations with more errors and more holes, leading to less effective study designs.”

Reaching outside of a health science field to examine inequities

Chiang notes that the uniqueness of this study lies in its breadth: the team evaluated more than a hundred global populations for issues with imputation. Such an evaluation had not been demonstrated before because of the general lack of diversity in available cohorts and in reference panels of fully sequenced genomes, a hurdle to understanding how well diverse groups fare with imputation in genetic epidemiology studies. So the research team took a different approach, borrowing genetic datasets from population genetics, a related field focused on understanding the history and evolution of a wide variety of populations, with less of a focus on disease.

In all, the scientists combined genomic sequencing data from 23 studies including more than 43,000 people from 123 distinct global populations. They matched each population with a control group of European ancestry and used a standard metric that doesn’t require full genomic sequences — which is normally the case in genome-wide association studies — to compare the reliability of imputation.  

Imputation for populations based in places such as Papua New Guinea, Thailand, Vietnam and Saudi Arabia was substantially less accurate than for populations of European descent. Chiang and his colleagues also plotted the relative reliability of imputation for different groups on a world map that is available online. Imputation for populations based in Asia, Australia, New Zealand and the Pacific Islands generally showed less accuracy.

The team also compared the main metric for the reliability of imputation used to arrive at these findings with a better metric that only works when full sequencing data is available. They found that the main metric is biased so that it overestimates the accuracy of imputation for populations other than people of European ancestry. This suggests that the flaws in imputation are more serious still than indicated by the researchers’ results.
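The sequencing-based check described above can be illustrated with a toy version of the dosage-accuracy idea: the squared correlation (r²) between imputed genotype dosages and the true genotypes from full sequencing. This is a common accuracy measure in the field, but the simulation below (imputation error modeled as noise, and all values) is purely illustrative, not from the study:

```python
import numpy as np

# Toy dosage-r^2 accuracy check: simulate imputed dosages as noisy copies
# of the true genotypes and measure squared correlation with the truth.
rng = np.random.default_rng(42)
true_genotypes = rng.integers(0, 3, size=500).astype(float)  # 0/1/2 alt alleles

def dosage_r2(true_g, noise_sd):
    """r^2 between true genotypes and simulated imputed dosages."""
    imputed = true_g + rng.normal(0.0, noise_sd, size=true_g.shape)
    r = np.corrcoef(true_g, imputed)[0, 1]
    return r * r

well_represented = dosage_r2(true_genotypes, noise_sd=0.2)   # small error
underrepresented = dosage_r2(true_genotypes, noise_sd=0.8)   # large error
print(well_represented > underrepresented)  # True
```

Because studies typically filter out variants below an r² threshold, systematically lower accuracy means underrepresented populations lose more variants from downstream analyses, compounding the disparity.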

Potential steps to make genome-wide association studies more equitable

The solution for the disparity highlighted in the study is straightforward, yet far from simple to achieve.

“We need to sequence more, and be more inclusive in the individuals who participate in studies,” said Chiang, who also holds an appointment in quantitative and computational biology at USC Dornsife College and is a member of the USC Norris Comprehensive Cancer Center.

One promising sign is that genomic sequencing has become more affordable in recent years and is expected to continue to do so. But cost isn’t the only concern that needs to be addressed. Efforts are needed to earn the trust of diverse communities so they are not hesitant to participate. In some cases, more diversity can complicate genome-wide association studies, particularly smaller ones, even confounding their findings if the diversity is not properly accounted for or characterized. This creates pressure on scientists to exclude smaller subsets of populations from their data and to draw instead from groups with more members.

Chiang advocates for a sort of balance.

“As the studies get bigger and bigger, the way that scientists view and analyze these data needs to evolve toward looking at genetic ancestry as more of a continuum,” he said. “If we can start to view everyone as related and branching off the same genetic tree at different places, according to their history, we can incorporate more people and more diversity. 

“Of course, there are valuable reasons to study discrete populations,” he continued. “Group identity can be useful to maintain, for example when studying the social determinants of health that affect what people experience in their daily lives. We need to continue studying particular populations in isolation, but in the long term, we need to be able to reconcile between the two approaches.”

The study’s first author, USC undergraduate Jordan Cahoon, hopes that by beginning to quantify disparities baked into genome-wide association studies, the team’s work will influence future solutions.

“It’s important to understand the weaknesses in the field in terms of equity and fairness,” said Cahoon, a graduating senior majoring in computer science at the USC Viterbi School of Engineering. “I’m hoping that this study will be a good resource for scientists, so they can see how well the populations they’re sequencing are doing in comparison to others.”

About this study

Other co-authors are Xinyue Rui, Echo Tang, Christopher Simons, Jalen Langie, Minhui Chen and Ying-Chu Lo, all of USC.

The study received support from the National Institutes of Health (R35GM14278), a USC Viterbi Merit Fellowship and a USC Provost’s Undergraduate Research Fellowship.