Tuesday, January 16, 2024

New study uses machine learning to bridge the reality gap in quantum devices



Peer-Reviewed Publication

UNIVERSITY OF OXFORD





FOR IMMEDIATE RELEASE TUESDAY 9 JANUARY 2024

A study led by the University of Oxford has used the power of machine learning to overcome a key challenge affecting quantum devices. For the first time, the findings reveal a way to close the ‘reality gap’: the difference between predicted and observed behaviour from quantum devices. The results have been published in Physical Review X.

Quantum computing could supercharge a wealth of applications, from climate modelling and financial forecasting to drug discovery and artificial intelligence. But this will require effective ways to scale and combine individual quantum devices (also called qubits). A major barrier to this is their inherent variability: even apparently identical units exhibit different behaviours.

Functional variability is presumed to be caused by nanoscale imperfections in the materials that quantum devices are made from. Since there is no way to measure these directly, this internal disorder cannot be captured in simulations, leading to the gap between predicted and observed outcomes.

To address this, the research group used a “physics-informed” machine learning approach to infer these disorder characteristics indirectly. This was based on how the internal disorder affected the flow of electrons through the device.

Lead researcher Associate Professor Natalia Ares (Department of Engineering Science, University of Oxford) said: ‘As an analogy, when we play “crazy golf” the ball may enter a tunnel and exit with a speed or direction that doesn’t match our predictions. But with a few more shots, a crazy golf simulator, and some machine learning, we might get better at predicting the ball’s movements and narrow the reality gap.’

The researchers measured the output current for different voltage settings across an individual quantum dot device. The data was input into a simulation which calculated the difference between the measured current and the theoretical current that would flow if no internal disorder were present. By measuring the current at many different voltage settings, the simulation was constrained to find an arrangement of internal disorder that could explain the measurements at all voltage settings. This approach used a combination of mathematical and statistical approaches coupled with deep learning.
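The idea of constraining a hidden disorder profile by requiring it to explain measurements at many voltage settings can be sketched in miniature. The toy device model, the numbers, and the search method below are all invented for illustration; they are not the physics or the deep-learning pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_current(voltages, disorder):
    # Toy stand-in for a device simulation (an assumption, not the paper's
    # model): the current depends on the voltage and a hidden disorder
    # offset at each of four internal "sites".
    return np.tanh(voltages[:, None] - disorder).sum(axis=1)

# "Measured" data, generated from an unknown true disorder profile.
true_disorder = rng.normal(0.0, 0.5, size=4)
voltages = np.linspace(-2, 2, 50)
measured = simulated_current(voltages, true_disorder)

def loss(disorder):
    # Mismatch between simulation and measurement across ALL voltage
    # settings; demanding a small value at every setting is what
    # constrains the inferred disorder.
    return np.mean((simulated_current(voltages, disorder) - measured) ** 2)

# A simple coordinate search stands in for the paper's learned inference.
estimate = np.zeros(4)
step = 0.5
for _ in range(60):
    for i in range(4):
        for delta in (+step, -step):
            trial = estimate.copy()
            trial[i] += delta
            if loss(trial) < loss(estimate):
                estimate = trial
    step *= 0.8

print("residual mismatch:", loss(estimate))
```

The inferred profile is not observed directly; it is accepted only because it reproduces the current at every voltage setting, which is the essence of closing the reality gap described above.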

Associate Professor Ares added: ‘In the crazy golf analogy, it would be equivalent to placing a series of sensors along the tunnel, so that we could take measurements of the ball’s speed at different points. Although we still can’t see inside the tunnel, we can use the data to inform better predictions of how the ball will behave when we take the shot.’

Not only did the new model find suitable internal disorder profiles to describe the measured current values, it was also able to accurately predict voltage settings required for specific device operating regimes.

Crucially, the model provides a new method to quantify the variability between quantum devices. This could enable more accurate predictions of how devices will perform, and also help to engineer optimum materials for quantum devices. It could inform compensation approaches to mitigate the unwanted effects of material imperfections in quantum devices.

Co-author David Craig, a PhD student at the Department of Materials, University of Oxford, added, ‘Similar to how we cannot observe black holes directly but we infer their presence from their effect on surrounding matter, we have used simple measurements as a proxy for the internal variability of nanoscale quantum devices. Although the real device still has greater complexity than the model can capture, our study has demonstrated the utility of using physics-aware machine learning to narrow the reality gap.’

Notes to editors:

For media enquiries and interview requests, contact Dr Natalia Ares: natalia.ares@eng.ox.ac.uk

The study ‘Bridging the reality gap in quantum devices with physics-aware machine learning’ has been published in Physical Review X: https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.011001

About the University of Oxford

Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the eighth year running, and number 3 in the QS World Rankings 2024. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer.

Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work helps the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research alongside our personalised approach to teaching sparks imaginative and inventive insights and solutions.

Through its research commercialisation arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 300 new companies since 1988. Over a third of these companies have been created in the past five years. The university is a catalyst for prosperity in Oxfordshire and the United Kingdom, contributing £15.7 billion to the UK economy in 2018/19, and supports more than 28,000 full-time jobs.


Accelerating how new drugs are made with machine learning


Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE




Researchers have developed a platform that combines automated experiments with AI to predict how chemicals will react with one another, which could accelerate the design process for new drugs.

Predicting how molecules will react is vital for the discovery and manufacture of new pharmaceuticals, but historically this has been a trial-and-error process, and the reactions often fail. To predict how molecules will react, chemists usually simulate electrons and atoms in simplified models, a process which is computationally expensive and often inaccurate.

Now, researchers from the University of Cambridge have developed a data-driven approach, inspired by genomics, where automated experiments are combined with machine learning to understand chemical reactivity, greatly speeding up the process. They’ve called their approach, which was validated on a dataset of more than 39,000 pharmaceutically relevant reactions, the chemical ‘reactome’.

Their results, reported in the journal Nature Chemistry, are the product of a collaboration between Cambridge and Pfizer.

“The reactome could change the way we think about organic chemistry,” said Dr Emma King-Smith from Cambridge’s Cavendish Laboratory, the paper’s first author. “A deeper understanding of the chemistry could enable us to make pharmaceuticals and so many other useful products much faster. But more fundamentally, the understanding we hope to generate will be beneficial to anyone who works with molecules.”

The reactome approach picks out relevant correlations between reactants, reagents, and performance of the reaction from the data, and points out gaps in the data itself. The data is generated from very fast, or high throughput, automated experiments.
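The two roles described above — pulling correlations between reaction components and performance out of high-throughput data, and flagging gaps in that data — can be illustrated with a small sketch. The reactant classes, catalyst names, and yields below are invented, and a plain linear fit stands in for the reactome's machine-learning models.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical high-throughput screen: 3 reactant classes x 4 catalysts.
reactants = ["amine", "alcohol", "thiol"]
catalysts = ["cat_A", "cat_B", "cat_C", "cat_D"]

# Simulated plate of yields; one combination was never run (a data gap).
data = {}
for r, c in itertools.product(reactants, catalysts):
    if (r, c) == ("thiol", "cat_D"):
        continue  # this well is missing from the dataset
    base = {"amine": 0.7, "alcohol": 0.5, "thiol": 0.3}[r]
    boost = {"cat_A": 0.1, "cat_B": 0.0, "cat_C": -0.1, "cat_D": 0.2}[c]
    data[(r, c)] = base + boost + rng.normal(0, 0.02)

# One-hot encode each reaction and fit a linear model: the coefficients
# expose correlations between components and reaction performance.
combos = list(data)
X = np.zeros((len(combos), len(reactants) + len(catalysts)))
y = np.array([data[k] for k in combos])
for row, (r, c) in enumerate(combos):
    X[row, reactants.index(r)] = 1
    X[row, len(reactants) + catalysts.index(c)] = 1
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same bookkeeping points out gaps: combinations never tested.
gaps = [rc for rc in itertools.product(reactants, catalysts) if rc not in data]
print("data gaps:", gaps)
```

Here the fitted coefficients recover that amines outperform thiols in this invented screen, and the untested (thiol, cat_D) well surfaces as a gap; the real reactome does this over tens of thousands of reactions with far richer models.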

“High throughput chemistry has been a game-changer, but we believed there was a way to uncover a deeper understanding of chemical reactions than what can be observed from the initial results of a high throughput experiment,” said King-Smith.

“Our approach uncovers the hidden relationships between reaction components and outcomes,” said Dr Alpha Lee, who led the research. “The dataset we trained the model on is massive – it will help bring the process of chemical discovery from trial-and-error to the age of big data.”

In a related paper, published in Nature Communications, the team developed a machine learning approach that enables chemists to introduce precise transformations to pre-specified regions of a molecule, enabling faster drug design.

The approach allows chemists to tweak complex molecules – like a last-minute design change – without having to make them from scratch. Making a molecule in the lab is typically a multi-step process, like building a house. If chemists want to vary the core of a molecule, the conventional way is to rebuild the molecule, like knocking the house down and rebuilding from scratch. However, core variations are important to medicine design.

A class of reactions, known as late-stage functionalisation reactions, attempts to directly introduce chemical transformations to the core, avoiding the need to start from scratch. However, it is challenging to make late-stage functionalisation selective and controlled – there are typically many regions of the molecules that can react, and it is difficult to predict the outcome.

“Late-stage functionalisations can yield unpredictable results, and current methods of modelling, including our own expert intuition, aren’t perfect,” said King-Smith. “A more predictive model would give us the opportunity for better screening.”

The researchers developed a machine learning model that predicts where a molecule would react, and how the site of reaction varies as a function of different reaction conditions. This enables chemists to find ways to precisely tweak the core of a molecule.

“We pretrained the model on a large body of spectroscopic data – effectively teaching the model general chemistry – before fine-tuning it to predict these intricate transformations,” said King-Smith. This approach allowed the team to overcome the limitation of low data: there are relatively few late-stage functionalisation reactions reported in the scientific literature. The team experimentally validated the model on a diverse set of drug-like molecules and was able to accurately predict the sites of reactivity under different conditions.
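The pretrain-then-fine-tune strategy for beating low data can be sketched with a toy regression problem. Everything below is invented for illustration: a plain linear model stands in for the team's network, abundant synthetic data stands in for the spectroscopic corpus, and three examples stand in for the scarce late-stage functionalisation reactions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 10

# "General chemistry" pretraining task: plenty of data.
w_pre_true = rng.normal(size=dim)
X_pre = rng.normal(size=(200, dim))
y_pre = X_pre @ w_pre_true
w_pre, *_ = np.linalg.lstsq(X_pre, y_pre, rcond=None)

# The target task is related but not identical, and data is scarce
# (3 examples), mirroring the few reported reactions.
w_target = w_pre_true + 0.1 * rng.normal(size=dim)
X_few = rng.normal(size=(3, dim))
y_few = X_few @ w_target

# From scratch: a minimum-norm fit to 3 points generalises poorly.
w_scratch, *_ = np.linalg.lstsq(X_few, y_few, rcond=None)

# Fine-tune: start from the pretrained weights, take a few gradient steps.
w_ft = w_pre.copy()
for _ in range(50):
    grad = X_few.T @ (X_few @ w_ft - y_few) / len(y_few)
    w_ft -= 0.1 * grad

# Evaluate both on fresh data from the target task.
X_test = rng.normal(size=(500, dim))
err = lambda w: np.mean((X_test @ w - X_test @ w_target) ** 2)
print(f"scratch MSE {err(w_scratch):.3f}  fine-tuned MSE {err(w_ft):.3f}")
```

The fine-tuned model inherits most of its knowledge from the data-rich task and only needs the handful of target examples for a small correction, which is the "similar but not the same" principle Lee describes.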

“The application of machine learning to chemistry is often throttled by the problem that the amount of data is small compared to the vastness of chemical space,” said Lee. “Our approach – designing models that learn from large datasets that are similar but not the same as the problem we are trying to solve – resolves this fundamental low-data challenge and could unlock advances beyond late-stage functionalisation.”

The research was supported in part by Pfizer and the Royal Society.
