Tuesday, January 16, 2024

 

What happens to our online activity over the switches to and from Daylight Saving Time?


Peer-Reviewed Publication

UNIVERSITY OF SURREY




Researchers noticed that after switching to DST, certain Google searches took place up to an hour earlier than usual. On the other hand, when clocks went back to standard time in autumn, these searches tended to occur later. 

 The shift in search times varied across search categories. Notably, the timing of sleep- and health-related searches shifted by less than 60 minutes across DST changes, suggesting that the internal body clock plays a robust role in driving them. 

 Professor Sara Montagnese, co-author of the study from the University of Surrey, said: 

"Our research calls for wider discussions about the health and wellbeing impact of DST and the complex relationship between our internal body clock and the time constraints imposed by society, which are collectively known as the “social clock”, of which DST is part." 

 Researchers analysed Google Trends data from Italy, covering 2015 to 2020. The team examined the relative search volume for 26 keywords, grouped into three categories: 

  • Sleep/health-related: includes search terms related to sleep patterns, sleep disorders, and overall health concerns. Terms such as “insomnia” and “melatonin” fall under this category. 

  • Medication: this category includes terms concerning drugs and pharmaceuticals. Terms like “painkiller” and “Xanax” are included in this category. 

  • Non-sleep/health-related: this category covers terms that are unrelated to sleep or health. Examples include “spa” and “taxi”. 
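
As a rough illustration of the underlying data retrieval, the sketch below pulls relative search volume for a few of the example keywords above from Google Trends for Italy, using the third-party pytrends library. The keyword selection and library choice are assumptions for illustration, not the authors' code; the study's timing analysis would additionally need hourly-resolution data.

```python
# Illustrative sketch only, not the study's code. Pulls relative search
# volume (RSV) for a few of the example keywords from Google Trends for
# Italy via the third-party pytrends library.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
keywords = ["insomnia", "melatonin", "spa", "taxi"]  # Google Trends caps each request at 5 terms

pytrends.build_payload(keywords, timeframe="2015-01-01 2020-12-31", geo="IT")
rsv = pytrends.interest_over_time()  # DataFrame of RSV on a 0-100 scale

print(rsv.head())
```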

The study has been published in the Journal of Circadian Rhythms. 


Alzheimer Europe adopts position on anti-amyloid therapies for Alzheimer’s disease, issuing a call to action for timely, safe and equitable access

In a new position paper, Alzheimer Europe calls for concrete actions to enable timely, safe and equitable access to anti-amyloid drugs, for patients who are most likely to benefit from these innovative new treatments for Alzheimer's disease

Reports and Proceedings

ALZHEIMER EUROPE

Luxembourg, 9 January 2024 – In a new position paper, and following engagement with its national members and the European Working Group of People with Dementia (EWGPWD), Alzheimer Europe calls for concrete actions to enable timely, safe and equitable access to anti-amyloid drugs, for patients who are most likely to benefit from these innovative new treatments for Alzheimer’s disease (AD).

The growing prevalence and impact of AD has catalysed huge investments in research on its causes, diagnosis, treatment and care. After many high-profile failures, recent clinical trials of anti-amyloid drugs have marked a turning point for the field, leading to the approval of the first disease-modifying therapies for AD in the US. European regulators are currently evaluating whether there is sufficient evidence to approve these drugs for patients with mild cognitive impairment (MCI) or mild dementia due to AD.

Anti-amyloid drugs represent a new hope for people with AD. Classed as disease-modifying therapies, drugs such as lecanemab and donanemab can slow the progressive, clinical decline associated with AD, with the potential to give patients more time in the less symptomatic stages of the disease. However, the benefits and risks of initiating treatment with anti-amyloid drugs are multifaceted and complex, as are the patterns of evidence and effectiveness from clinical trials.

Access to anti-amyloid drugs hinges entirely on a timely and accurate diagnosis of AD, in the MCI or mild dementia stages, with biomarker confirmation of AD pathology. However, diagnosing AD remains challenging in clinical practice, excluding many from accessing patient-centred support, care and treatments. Currently, European healthcare systems are inadequately resourced to provide a timely diagnosis, let alone equitable access to anti-amyloid drugs, for all people with early AD who could benefit from treatment.

The Alzheimer Europe position paper addresses questions of anti-amyloid drug efficacy, safety and cost, highlighting three priority areas to ensure equitable access to these innovative treatments: effective communication of risks and benefits; an accurate, timely diagnosis; and healthcare systems preparedness. To address these challenges, Alzheimer Europe calls for concrete actions from industry, regulators, payers, healthcare systems and governments. These include:

  • Accessible, inclusive communication of the benefits and risks of anti-amyloid drugs, so patients can weigh the potential slowing of clinical decline against the side effects, financial costs and logistical burdens of treatment;
  • The adoption of realistic, sustainable pricing policies for anti-amyloid drugs, coupled with clear reimbursement frameworks that reflect the true value of treatment for patients and society, without impacting the coverage of existing therapies that are hugely valued by people with dementia and their carers/supporters;
  • Development of patient registries for long-term collection of real-world evidence on the efficacy and safety of anti-amyloid drugs, including data on outcomes that are meaningful for patients and their carers/supporters;
  • Investment in infrastructures for diagnosis and treatment, with expansion of workforce capacity and capability supported by clear guidance on drug eligibility, and parameters for treatment initiation, safety monitoring and discontinuation;
  • Implementation of biomarker-guided clinical pathways which support the diagnosis and treatment of AD in the early stages of disease, integrated alongside existing pathways focused on managing the symptoms of later-stage dementia;
  • Continued investment in the development of diagnostics and treatments for other causes and stages of dementia, as well as support and care services that can help people live well with dementia at all stages.

Commenting on the position paper, Alzheimer Europe’s Executive Director, Jean Georges, stated:

“If anti-amyloid drugs are approved by European regulators, these innovative treatments should be accessible for patients most likely to benefit, with clear protocols to exclude those most likely to suffer serious side effects. Fair pricing policies are crucial to support broad coverage and reimbursement. The development of disease-modifying therapies for early AD marks a turning point in the fight against the disease. However, the needs of people with more advanced AD, or less common forms of dementia, must not be overlooked. Research into other treatment options is essential, including symptomatic treatment for people with more advanced dementia, and preventative approaches throughout the lifecourse.”

Our full position paper can be accessed on the Alzheimer Europe website:

https://www.alzheimer-europe.org/policy/positions/alzheimer-europe-position-anti-amyloid-therapies

 

For further information, contact:

Jean Georges, Executive Director of Alzheimer Europe, 14, rue Dicks, L-1417 Luxembourg, Tel.: +352-29 79 70, Fax: +352-29 79 72, jean.georges@alzheimer-europe.org  www.alzheimer-europe.org

Notes to editors:

Alzheimer Europe is the umbrella organisation of national Alzheimer associations and currently has 42 member organisations in 37 European countries. Our mission is to change perceptions, policy and practice in order to improve the lives of people affected by dementia.

The European Working Group of People with Dementia was launched by Alzheimer Europe and its member associations in 2012. The group is composed entirely of people with dementia, who are nominated by their national Alzheimer associations. They work to ensure that the activities, projects and meetings of Alzheimer Europe duly reflect the priorities and views of people living with dementia. The Chairperson is also an ex-officio member on the Board of Alzheimer Europe with full voting rights.


Different biological variants discovered in Alzheimer's disease


Research from Amsterdam UMC could be essential in the evaluation of future medication

Peer-Reviewed Publication

AMSTERDAM UNIVERSITY MEDICAL CENTER




Dutch scientists have discovered five biological variants of Alzheimer's disease, which may require different treatments. As a result, previously tested drugs may incorrectly appear to be ineffective or only minimally effective. This is the conclusion of researcher Betty Tijms and colleagues from Alzheimer Center Amsterdam, Amsterdam UMC and Maastricht University. The research results will be published on 9 January in Nature Aging. 

In those with Alzheimer's disease, the amyloid and tau proteins clump in the brain. In addition to these clumps, other biological processes such as inflammation and nerve cell growth are also involved. Using new techniques, the researchers have been able to measure these other processes in the cerebrospinal fluid of patients with amyloid and tau clumps. 

Betty Tijms and Pieter Jelle Visser examined 1058 proteins in the cerebrospinal fluid of 419 people with Alzheimer's disease. They found that there are five biological variants within this group. The first variant is characterized by increased amyloid production. In a second type, the blood-brain barrier is disrupted, amyloid production is reduced, and there is less nerve cell growth. Furthermore, the variants differ in the degree of protein synthesis, the functioning of the immune system, and the functioning of the organ that produces cerebrospinal fluid. Patients with different Alzheimer's variants also showed differences in other aspects of the disease. For example, the researchers found a faster course of the disease in certain subgroups. 
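
To make the idea of data-driven subtyping concrete, here is a minimal, generic sketch of clustering a patients-by-proteins matrix into five groups. The numbers mirror the study's scale, but the data are synthetic and the method (k-means on z-scored values) is an assumption chosen for simplicity, not the authors' actual analysis pipeline.

```python
# Generic, illustrative subtyping sketch; NOT the authors' pipeline.
# Scale mirrors the study (419 patients x 1058 CSF proteins), but the
# data here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(419, 1058))  # placeholder proteomic measurements

X_scaled = StandardScaler().fit_transform(X)  # z-score each protein
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)

print(np.bincount(labels))  # patients per putative biological variant
```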

The findings are of great importance for drug research. They imply that a given drug may work in only one variant of Alzheimer's disease. For example, medication that inhibits amyloid production may work in the variant with increased amyloid production but may be harmful in the variant with decreased amyloid production. It is also possible that patients with one variant have a higher risk of side effects, while that risk is much lower for other variants. The next step for the research team is to show that the Alzheimer's variants do indeed respond differently to medicines, so that in the future every patient can be treated with the appropriate medicine. 


Queen Mary University of London study reveals genetic legacy of racial and gender hierarchies


Peer-Reviewed Publication

QUEEN MARY UNIVERSITY OF LONDON




Researchers from Queen Mary University of London have revealed how sociocultural factors, in addition to geography, play a significant role in shaping the genetic diversity of modern societies. The research, published in eLife, employed deep learning to unravel intricate patterns of ancestry-related sex bias and assortative mating, revealing how societal structures have shaped the genetic diversity of the Americas. 

"Our study sheds light on how social stratification has woven its threads into the genetic fabric of admixed populations in the Americas," remarked Dr Matteo Fumagalli, Senior Lecturer in Genetics at Queen Mary University of London. "For the first time, we used a mating model where the individual proportions of the genome inherited from Native American, European and sub-Saharan African ancestries dictate the mating probabilities." 

The researchers analysed genetic data from hundreds of individuals across the Americas, revealing striking differences in mating patterns between Latin America and North America. In Latin America, the proportion of Native American ancestry in both men and women significantly influenced mating probabilities, shaping the genetic composition of the population. Conversely, in North America, sub-Saharan African ancestry played a more prominent role in determining mating choices. 

The researchers also examined the historical context, investigating how population stratification in the Americas was shaped by racial and gender hierarchies that have constrained admixture since European colonisation and the subsequent Atlantic slave trade. Their findings reveal that racial stratification intensified gender inequalities, and that historically enforced mixing between social classes diluted non-European ancestry without diminishing discrimination. 

“The study's findings hold profound implications for our understanding of the historical and genetic tapestry of the Americas. They illuminate how social stratification, deeply rooted in racial and gender hierarchies, has left an indelible mark on the genetic diversity of these populations, leaving an enduring legacy,” commented Dr Matteo Fumagalli. 

This study also demonstrates the power of AI in tackling complex biological questions. By developing this deep learning model, the researchers were able to quantify the extent to which ancestry-driven mating has shaped the genomes of admixed societies, revealing the profound impact of social forces on human diversity. 

Looking beyond the Americas, the researchers envision applying their method to other admixed populations. “Our approach has the potential to unlock the secrets of other admixed populations worldwide, furthering our understanding of how sociocultural factors have impacted the genetic tapestry of modern societies,” stated Matteo Fumagalli. 

New study uses machine learning to bridge the reality gap in quantum devices



Peer-Reviewed Publication

UNIVERSITY OF OXFORD





FOR IMMEDIATE RELEASE TUESDAY 9 JANUARY 2024

A study led by the University of Oxford has used the power of machine learning to overcome a key challenge affecting quantum devices. For the first time, the findings reveal a way to close the ‘reality gap’: the difference between predicted and observed behaviour from quantum devices. The results have been published in Physical Review X.

Quantum computing could supercharge a wealth of applications, from climate modelling and financial forecasting to drug discovery and artificial intelligence. But this will require effective ways to scale and combine individual quantum devices (also called qubits). A major barrier is inherent variability, whereby even apparently identical units exhibit different behaviours.

Functional variability is presumed to be caused by nanoscale imperfections in the materials that quantum devices are made from. Since there is no way to measure these directly, this internal disorder cannot be captured in simulations, leading to a gap between predicted and observed outcomes.

To address this, the research group used a “physics-informed” machine learning approach to infer these disorder characteristics indirectly. This was based on how the internal disorder affected the flow of electrons through the device.

Lead researcher Associate Professor Natalia Ares (Department of Engineering Science, University of Oxford) said: ‘As an analogy, when we play “crazy golf” the ball may enter a tunnel and exit with a speed or direction that doesn’t match our predictions. But with a few more shots, a crazy golf simulator, and some machine learning, we might get better at predicting the ball’s movements and narrow the reality gap.’

The researchers measured the output current at different voltage settings across an individual quantum dot device. The data were fed into a simulation that calculated the difference between the measured current and the theoretical current that would flow if no internal disorder were present. By measuring the current at many different voltage settings, the simulation was constrained to find an arrangement of internal disorder that could explain the measurements at all of them. This approach combined mathematical and statistical methods with deep learning.
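
A minimal sketch of that fitting idea, under heavily simplified assumptions: treat the unseen disorder as a small parameter vector inside a forward model for current versus voltage, and fit it so the simulated current matches the "measured" current at every voltage setting. The forward model below is invented for illustration and is not the study's device physics; only the inference loop carries over.

```python
# Minimal sketch of inferring hidden disorder from I(V) measurements.
# The forward model is a made-up placeholder, not real device physics.
import numpy as np
from scipy.optimize import minimize

V = np.linspace(0.0, 1.0, 50)  # applied voltage settings

def simulate_current(V, disorder):
    """Placeholder forward model: smooth I(V) plus disorder-driven terms."""
    modes = np.arange(1, len(disorder) + 1)
    return V**2 + np.sin(np.outer(V, modes * np.pi)) @ disorder

true_disorder = np.array([0.05, -0.02, 0.01])
I_measured = simulate_current(V, true_disorder)  # stands in for lab data

def loss(d):
    return np.mean((simulate_current(V, d) - I_measured) ** 2)

fit = minimize(loss, x0=np.zeros(3))
print(np.round(fit.x, 4))  # recovered disorder, close to true_disorder
```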

Associate Professor Ares added: ‘In the crazy golf analogy, it would be equivalent to placing a series of sensors along the tunnel, so that we could take measurements of the ball’s speed at different points. Although we still can’t see inside the tunnel, we can use the data to inform better predictions of how the ball will behave when we take the shot.’

Not only did the new model find suitable internal disorder profiles to describe the measured current values, but it was also able to accurately predict the voltage settings required for specific device operating regimes.

Crucially, the model provides a new method to quantify the variability between quantum devices. This could enable more accurate predictions of how devices will perform, and also help to engineer optimum materials for quantum devices. It could inform compensation approaches to mitigate the unwanted effects of material imperfections in quantum devices.

Co-author David Craig, a PhD student at the Department of Materials, University of Oxford, added, ‘Similar to how we cannot observe black holes directly but we infer their presence from their effect on surrounding matter, we have used simple measurements as a proxy for the internal variability of nanoscale quantum devices. Although the real device still has greater complexity than the model can capture, our study has demonstrated the utility of using physics-aware machine learning to narrow the reality gap.’

Notes to editors:

For media enquiries and interview requests, contact Dr Natalia Ares: natalia.ares@eng.ox.ac.uk

The study ‘Bridging the reality gap in quantum devices with physics-aware machine learning’ has been published in Physical Review X: https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.011001

About the University of Oxford

Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the eighth year running, and number 3 in the QS World Rankings 2024. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer.

Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work helps the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research alongside our personalised approach to teaching sparks imaginative and inventive insights and solutions.

Through its research commercialisation arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 300 new companies since 1988. Over a third of these companies have been created in the past five years. The university is a catalyst for prosperity in Oxfordshire and the United Kingdom, contributing £15.7 billion to the UK economy in 2018/19, and supports more than 28,000 full time jobs.


Accelerating how new drugs are made with machine learning


Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE




Researchers have developed a platform that combines automated experiments with AI to predict how chemicals will react with one another, which could accelerate the design process for new drugs.

Predicting how molecules will react is vital for the discovery and manufacture of new pharmaceuticals, but historically this has been a trial-and-error process, and the reactions often fail. To predict how molecules will react, chemists usually simulate electrons and atoms in simplified models, a process which is computationally expensive and often inaccurate.

Now, researchers from the University of Cambridge have developed a data-driven approach, inspired by genomics, where automated experiments are combined with machine learning to understand chemical reactivity, greatly speeding up the process. They’ve called their approach, which was validated on a dataset of more than 39,000 pharmaceutically relevant reactions, the chemical ‘reactome’.

Their results, reported in the journal Nature Chemistry, are the product of a collaboration between Cambridge and Pfizer.

“The reactome could change the way we think about organic chemistry,” said Dr Emma King-Smith from Cambridge’s Cavendish Laboratory, the paper’s first author. “A deeper understanding of the chemistry could enable us to make pharmaceuticals and so many other useful products much faster. But more fundamentally, the understanding we hope to generate will be beneficial to anyone who works with molecules.”

The reactome approach picks out relevant correlations between reactants, reagents, and performance of the reaction from the data, and points out gaps in the data itself. The data is generated from very fast, or high throughput, automated experiments.
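
As a hedged sketch of the general idea, learning reaction performance from tabulated high-throughput components, the snippet below fits a simple model to a toy reaction table. The component names, yields and model choice are invented for illustration; the authors' architecture and data are far richer.

```python
# Hedged sketch: predict reaction yield from categorical reaction
# components. All values here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder

# Toy high-throughput table: (reactant, catalyst, solvent) -> yield.
components = np.array([
    ["aryl_bromide", "Pd_cat_A", "DMF"],
    ["aryl_bromide", "Pd_cat_B", "THF"],
    ["aryl_chloride", "Pd_cat_A", "DMF"],
    ["aryl_chloride", "Pd_cat_B", "THF"],
])
yields = np.array([0.82, 0.45, 0.30, 0.12])

X = OneHotEncoder().fit_transform(components)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, yields)

print(model.predict(X[:1]))  # predicted yield for the first combination
```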

“High throughput chemistry has been a game-changer, but we believed there was a way to uncover a deeper understanding of chemical reactions than what can be observed from the initial results of a high throughput experiment,” said King-Smith.

“Our approach uncovers the hidden relationships between reaction components and outcomes,” said Dr Alpha Lee, who led the research. “The dataset we trained the model on is massive – it will help bring the process of chemical discovery from trial-and-error to the age of big data.”

In a related paper, published in Nature Communications, the team developed a machine learning approach that enables chemists to introduce precise transformations to pre-specified regions of a molecule, enabling faster drug design.

The approach allows chemists to tweak complex molecules – like a last-minute design change – without having to make them from scratch. Making a molecule in the lab is typically a multi-step process, like building a house. If chemists want to vary the core of a molecule, the conventional way is to rebuild the molecule, like knocking the house down and rebuilding from scratch. However, core variations are important to medicine design.

A class of reactions, known as late-stage functionalisation reactions, attempts to directly introduce chemical transformations to the core, avoiding the need to start from scratch. However, it is challenging to make late-stage functionalisation selective and controlled – there are typically many regions of the molecules that can react, and it is difficult to predict the outcome.

“Late-stage functionalisations can yield unpredictable results, and current methods of modelling, including our own expert intuition, aren't perfect,” said King-Smith. “A more predictive model would give us the opportunity for better screening.”

The researchers developed a machine learning model that predicts where a molecule would react, and how the site of reaction varies as a function of different reaction conditions. This enables chemists to find ways to precisely tweak the core of a molecule.

“We pretrained the model on a large body of spectroscopic data – effectively teaching the model general chemistry – before fine-tuning it to predict these intricate transformations,” said King-Smith. This approach allowed the team to overcome the limitation of low data: there are relatively few late-stage functionalisation reactions reported in the scientific literature. The team experimentally validated the model on a diverse set of drug-like molecules and was able to accurately predict the sites of reactivity under different conditions.
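
The two-stage recipe described here, pretraining on plentiful data and then fine-tuning on scarce labels, can be sketched schematically as below. The dimensions, dummy tensors and loss choices are placeholders; the real model and its spectroscopic pretraining task are far richer.

```python
# Schematic pretrain-then-fine-tune sketch on dummy tensors; shapes and
# tasks are placeholders, not the authors' actual model or data.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))
spectra_head = nn.Linear(128, 10)  # pretraining: predict spectral features
site_head = nn.Linear(128, 1)      # fine-tuning: reactivity of an atom site

# Stage 1: pretrain on abundant (here synthetic) spectroscopic data.
x_pre, y_pre = torch.randn(512, 64), torch.randn(512, 10)
opt = torch.optim.Adam(list(backbone.parameters()) + list(spectra_head.parameters()))
for _ in range(100):
    opt.zero_grad()
    F.mse_loss(spectra_head(backbone(x_pre)), y_pre).backward()
    opt.step()

# Stage 2: fine-tune on scarce late-stage functionalisation labels.
x_ft, y_ft = torch.randn(32, 64), torch.rand(32, 1)  # few labelled examples
opt = torch.optim.Adam(list(backbone.parameters()) + list(site_head.parameters()), lr=1e-4)
for _ in range(50):
    opt.zero_grad()
    F.binary_cross_entropy_with_logits(site_head(backbone(x_ft)), y_ft).backward()
    opt.step()

print(torch.sigmoid(site_head(backbone(x_ft[:1]))))  # predicted reactivity
```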

“The application of machine learning to chemistry is often throttled by the problem that the amount of data is small compared to the vastness of chemical space,” said Lee. “Our approach – designing models that learn from large datasets that are similar but not the same as the problem we are trying to solve – resolves this fundamental low-data challenge and could unlock advances beyond late-stage functionalisation.”

The research was supported in part by Pfizer and the Royal Society.

 

Towards more accurate 3D object detection for robots and self-driving cars


Researchers have developed a network that combines 3D LiDAR and 2D image data to enable more robust detection of small objects.


Peer-Reviewed Publication

RITSUMEIKAN UNIVERSITY

[Image: A new network for 3D object detection. The proposed model adopts innovative strategies that enable it to accurately combine 3D LiDAR data with 2D images, leading to significantly better performance than state-of-the-art models for small-target detection, even under adverse weather conditions. Credit: Hiroyuki Tomiyama, Ritsumeikan University]




Robotics and autonomous vehicles are among the most rapidly growing domains in the technological landscape, with the potential to make work and transportation safer and more efficient. Since both robots and self-driving cars need to accurately perceive their surroundings, 3D object detection methods are an active area of study. Most such methods employ LiDAR sensors to create 3D point clouds of the environment. Simply put, LiDAR sensors use laser beams to rapidly scan and measure the distances of objects and surfaces around the source. However, using LiDAR data alone can lead to errors due to the high sensitivity of LiDAR to noise, especially in adverse weather conditions such as rainfall.

To tackle this issue, scientists have developed multi-modal 3D object detection methods that combine 3D LiDAR data with 2D RGB images taken by standard cameras. While the fusion of 2D images and 3D LiDAR data leads to more accurate 3D detection results, it still faces its own set of challenges, with accurate detection of small objects remaining difficult. The problem mainly lies in properly aligning the semantic information extracted independently from the 2D and 3D datasets, which is hard due to issues such as imprecise calibration or occlusion.

Against this backdrop, a research team led by Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan, has developed an innovative approach to make multi-modal 3D object detection more accurate and robust. The proposed scheme, called “Dynamic Point-Pixel Feature Alignment Network” (DPPFA-Net), is described in their paper published in IEEE Internet of Things Journal on 3 November 2023.

The model comprises an arrangement of multiple instances of three novel modules: the Memory-based Point-Pixel Fusion (MPPF) module, the Deformable Point-Pixel Fusion (DPPF) module, and the Semantic Alignment Evaluator (SAE) module. The MPPF module is tasked with performing explicit interactions between intra-modal features (2D with 2D and 3D with 3D) and cross-modal features (2D with 3D). The use of the 2D image as a memory bank reduces the difficulty in network learning and makes the system more robust against noise in 3D point clouds. Moreover, it promotes the use of more comprehensive and discriminative features.

In contrast, the DPPF module performs interactions only at pixels in key positions, which are determined via a smart sampling strategy. This allows for feature fusion in high resolutions at a low computational complexity. Finally, the SAE module helps ensure semantic alignment between both data representations during the fusion process, which mitigates the issue of feature ambiguity.
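
For flavour, here is a generic residual cross-attention block in which 3D point features query 2D pixel features, the rough shape of the point-pixel interactions described above. It is a standard cross-attention layer, not the paper's MPPF, DPPF or SAE modules.

```python
# Generic cross-modal fusion sketch: 3D point features attend over 2D
# pixel features. A rough analogue only, not the paper's modules.
import torch
import torch.nn as nn

class PointPixelFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_feats, pixel_feats):
        # point_feats: (B, N_points, dim); pixel_feats: (B, N_pixels, dim).
        # Each point attends over the image features, pulling in 2D
        # semantic context to enrich its 3D representation.
        fused, _ = self.attn(point_feats, pixel_feats, pixel_feats)
        return self.norm(point_feats + fused)  # residual connection

fusion = PointPixelFusion()
pts, px = torch.randn(2, 1024, 64), torch.randn(2, 4096, 64)
print(fusion(pts, px).shape)  # torch.Size([2, 1024, 64])
```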

The researchers tested DPPFA-Net by comparing it to the top performers on the widely used KITTI Vision Benchmark. Notably, the proposed network achieved average precision improvements as high as 7.18% under different noise conditions. To further test the capabilities of their model, the team created a new noisy dataset by introducing artificial multi-modal noise in the form of rainfall to the KITTI dataset. The results show that the proposed network performed better than existing models not only in the face of severe occlusions but also under various levels of adverse weather conditions. “Our extensive experiments on the KITTI dataset and challenging multi-modal noisy cases reveal that DPPFA-Net reaches a new state-of-the-art,” remarks Prof. Tomiyama.

There are various ways in which accurate 3D object detection methods could improve our lives. Self-driving cars, which rely on such techniques, have the potential to reduce accidents and improve traffic flow and safety. Furthermore, the implications in the field of robotics should not be understated. “Our study could facilitate a better understanding and adaptation of robots to their working environments, allowing a more precise perception of small targets,” explains Prof. Tomiyama. “Such advancements will help improve the capabilities of robots in various applications.” Another use for 3D object detection networks is the pre-labeling of raw data for deep-learning perception systems. This would greatly reduce the cost of manual annotation, accelerating developments in the field.

Overall, this study is a step in the right direction towards making autonomous systems more perceptive and assisting us better with human activities.

 

***

 

Reference

 

DOI: https://doi.org/10.1109/JIOT.2023.3329884

 

About Ritsumeikan University, Japan
Ritsumeikan University is one of the most prestigious private universities in Japan. Its main campus is in Kyoto, where inspiring settings await researchers. With an unwavering objective to generate social symbiotic values and emergent talents, it aims to emerge as a next-generation research university. It will enhance researcher potential by providing support best suited to the needs of young and leading researchers, according to their career stage. Ritsumeikan University also endeavors to build a global research network as a “knowledge node” and disseminate achievements internationally, thereby contributing to the resolution of social/humanistic issues through interdisciplinary research and social implementation.

Website: http://en.ritsumei.ac.jp/

Ritsumeikan University Research Report: https://www.ritsumei.ac.jp/research/radiant/eng/

 

About Professor Hiroyuki Tomiyama from Ritsumeikan University, Japan
Professor Hiroyuki Tomiyama received B.E., M.E., and D.E. degrees in computer science from Kyushu University in 1994, 1996, and 1999, respectively. He joined the College of Science and Engineering at Ritsumeikan University in 2010, where he works as a Full Professor. He specializes in embedded and cyber-physical systems, autonomous drones, biochip synthesis, and the automation and optimization of electronic designs. He has published over 110 papers on these subjects as well as several books.

 

Funding information
This work is partly supported by JSPS KAKENHI Grant Number 20K23333 and partly commissioned by NEDO (Project Number JPNP22006).


New CRISPR Center brings hope for rare and deadly genetic diseases


Business Announcement

UNIVERSITY OF CALIFORNIA - SAN FRANCISCO




Children and adults with rare, deadly genetic diseases have fresh hope for curative therapies, thanks to a new collaboration between the Innovative Genomics Institute (IGI) and Danaher Corporation, a global life sciences and diagnostics innovator. 
 
The new Danaher-IGI Beacon for CRISPR Cures center will use genome editing to address potentially hundreds of diseases, including rare genetic disorders that have no cure. The goal is to ensure treatments can be developed and brought to patients more quickly and efficiently.   
 
The IGI comprises genetics researchers and clinicians from three University of California campuses: UCSF, UCLA and UC Berkeley, where the institute is housed, as well as from other research institutions. Danaher will provide tools, reagents, resources and expertise to accelerate preclinical and clinical development and establish new standards for safety and efficacy. 
 
The center will work first on CRISPR treatments for two genetic defects of the immune system: familial hemophagocytic lymphohistiocytosis (HLH), which causes immune cells to become overactive, damaging tissues and organs throughout the body; and Artemis-deficient severe combined immunodeficiency (ART-SCID), in which T and B lymphocytes fail to mature, making infants vulnerable to fatal infections.  
 
The standard treatment for both conditions, a bone marrow transplant, is inadequate due to frequent complications. 
 
“With CRISPR, we can speed up the development of improved therapies that can reach all the patients who need them,” said Jennifer Puck, MD, a pediatrics professor who directs the UCSF Jeffrey Modell Diagnostic Center for Primary Immunodeficiencies. “All patients deserve a sense of urgency – including those with rare diseases, many of whom are children.” 

Since the CRISPR platform being created at IGI could, in theory, be reprogrammed to address any gene mutation, the goal is to use the treatments for HLH and ART-SCID as models for a scalable approach from which new medicines for other genetic diseases can be rapidly developed. 
 
“The unique nature of CRISPR makes it ideal for developing and deploying a platform capability for CRISPR cures on demand,” said Fyodor Urnov, IGI’s Director of Technology and Translation, who is overseeing the project along with Jennifer Doudna and IGI Executive Director Brad Ringeisen. “Danaher and the IGI are in a unique position to potentially create a first-of-its-kind CRISPR cures ‘cookbook’ that could be used by any team wishing to take on other diseases.” 
 
ART-SCID and HLH are typical of many rare diseases in that they have small patient populations, making drug development challenging and cost-prohibitive. On average, a single clinical trial takes 10 years. 
 
HLH and ART-SCID are two examples of a class of diseases known as inborn errors of immunity, or IEIs. Each IEI is very rare, but collectively there are about 500 such diseases affecting more than 112,000 patients. 
 
“We can develop CRISPR cures in a laboratory, but at the end of the day we need a way to turn those into clinical products for thousands of patients,” says IGI founder Jennifer Doudna, PhD, a UC Berkeley biochemist who won the Nobel Prize for co-developing CRISPR.  
 
Currently there are only a few hundred patients in clinical trials for CRISPR-based therapies; the IGI hopes its work will allow that number to ramp up ten-fold over the next decade. 
 
After decades of research, Puck and UCSF Pediatrics Professor Mort Cowan, MD, successfully treated 14 children with ART-SCID, known colloquially as Bubble Baby Disease, by inserting a corrected version of the Artemis gene into the children’s own bone marrow stem cells using a delivery system known as a lentivirus. A CRISPR-based version of this treatment could more precisely target where the gene copies go, avoiding possible toxicity from lentiviral interference with genes near sites of insertion in the genome.  
 
Both ART-SCID and HLH have extensive patient registries to facilitate enrollment in future clinical trials. Since both are diseases of blood-forming bone marrow stem cells that renew the immune system throughout the life span, targeting these cells can bypass challenges in delivering CRISPR molecules to tissues in other disorders.  
 
“We know how to deliver the CRISPR molecules into the cells to fix them,” Cowan said. “We also know how to reach patients, because there is an existing registry and network of expert physicians. By focusing on ART-SCID and HLH first, we aim to create a roadmap through pre-clinical and clinical development and lead the way for other indications, whether they are rare or not.” 

The IGI team includes UCSF physician-scientists Matthew Kan, MD, PhD, Puck and Cowan focusing on ART-SCID; and David Nguyen, MD, PhD, Michelle Hermiston, MD, PhD and Bryan Shy, MD, PhD, focusing on HLH. Petros Giannikopoulos, MD, director of IGI’s Clinical Laboratory, will be the center’s diagnostic and analytical lead. Donald Kohn, MD, of UCLA will be involved in translating the gene editing approaches developed at UCSF and UC Berkeley to clinical cell manufacturing in the UCLA Human Gene and Cell Therapy Facility. 
 
Cowan, Kan, Kohn and Puck are recipients of grants from the California Institute of Regenerative Medicine, CIRM, which have enabled them to reach the current stage of their work with the Beacon center. 

 

About UCSF: The University of California, San Francisco (UCSF) is exclusively focused on the health sciences and is dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. UCSF Health, which serves as UCSF's primary academic medical center, includes top-ranked specialty hospitals and other clinical programs, and has affiliations throughout the Bay Area. UCSF School of Medicine also has a regional campus in Fresno. Learn more at https://ucsf.edu, or see our Fact Sheet.

About the Innovative Genomics Institute: The Innovative Genomics Institute (IGI) is a joint effort between the Bay Area’s leading scientific research institutions, UC Berkeley and UC San Francisco, with affiliates at UC Davis, UCLA, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Gladstone Institutes, and other institutions. Founded by Nobel laureate Jennifer Doudna, the IGI’s mission is to bridge revolutionary genome-editing tool development to affordable and accessible solutions in human health, climate, and agriculture. We are working toward a world where genomic technology is routinely applied to treat genetic disease, enable sustainable agriculture, and help achieve a carbon-neutral economy. www.innovativegenomics.org

###

 
