
Sunday, December 11, 2022

Nuclear theorists collaborate to explore 'heavy flavor' particles

Leading US researchers will develop framework for describing exotic particles' behavior at various stages in the evolution of hot nuclear matter

Grant and Award Announcement

DOE/BROOKHAVEN NATIONAL LABORATORY


Tracking Heavy Quarks 

IMAGE: Collisions at the Relativistic Heavy Ion Collider (RHIC) produce a hot soup of quarks and gluons (center)—and ultimately thousands of new particles. A new theory collaboration seeks to understand how heavy quarks (Q) and antiquarks (Q-bar) interact with this quark-gluon plasma (QGP) and transform into composite particles that strike the detector. Tracking these "heavy flavor" particles can help scientists unravel the underlying microscopic processes that drive the properties of the QGP.

CREDIT: BROOKHAVEN NATIONAL LABORATORY

UPTON, NY—Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory will participate in a new Topical Theory Collaboration funded by DOE’s Office of Nuclear Physics to explore the behavior of so-called “heavy flavor” particles. These particles are made of quarks of the “charm” and “bottom” varieties, which are heavier and rarer than the “up” and “down” quarks that make up the protons and neutrons of ordinary atomic nuclei. By understanding how these exotic particles form, evolve, and interact with the medium created during powerful particle collisions, scientists will gain a deeper understanding of a unique form of matter known as a quark-gluon plasma (QGP) that filled the early universe.

These experiments take place at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven Lab and the Large Hadron Collider (LHC) at Europe’s CERN laboratory. Scientists accelerate and smash together the nuclei of heavy atoms at energies high enough to set free the quarks and gluelike “gluons” that hold ordinary matter together. These collisions create a soup of quarks and gluons much like the matter that existed just after the Big Bang, some 14 billion years ago.

A powerful theory, known as quantum chromodynamics (QCD), describes very accurately how the plasma’s quarks and gluons interact. But understanding how those fundamental interactions lead to the complex characteristics of the plasma—a trillion-degree, dense medium that flows like a fluid with no resistance—remains a great challenge in modern research.

The Heavy-Flavor Theory (HEFTY) for QCD Matter Topical Theory Collaboration, which will be led by Ralf Rapp from Texas A&M University, seeks to close that gap in understanding by developing a rigorous and comprehensive theoretical framework for describing how heavy-flavor particles interact with the QGP.

“With a heavy-flavor framework in place, experiments tracking these particles can be used to precisely probe the plasma’s properties,” said Peter Petreczky, a theorist at Brookhaven Lab, who will serve as co-spokesperson for the collaboration along with Ramona Vogt from DOE’s Lawrence Livermore National Laboratory. “Our framework will also provide a foundation for using heavy-flavor particles as a probe at the future Electron-Ion Collider (EIC). Future experiments at the EIC will probe different forms of cold nuclear matter which are the precursors of the QGP in the laboratory,” Petreczky said.

In heavy ion collisions at RHIC and the LHC, heavy charm and bottom quarks are produced upon initial impact of the colliding nuclei. Their large masses cause a diffusive motion that can serve as a marker of the interactions in the QGP, including the fundamental process of quarks binding together to form composite particles called hadrons.

“The framework needs to describe these particles from their initial production when the nuclei first collide, through their subsequent diffusion through the QGP and hadronization,” Petreczky said. “And these descriptions need to be embedded into realistic numerical simulations that enable quantitative comparisons to experimental data.”

Swagato Mukherjee of Brookhaven Lab will be a co-principal investigator in the collaboration, responsible for lattice QCD computations. These calculations require some of the world’s most powerful supercomputers to handle the complex array of variables involved in quark-gluon interactions.

“Recently there has been significant progress in lattice QCD calculations related to heavy flavor probes of QGP,” Mukherjee said. “We are in an exciting time when the exascale computing facilities and the support provided by the topical collaboration will enable us to perform realistic calculations of the key quantities needed for theoretical interpretation of experimental results on heavy flavor probes.”

In addition to lattice QCD, the collaboration will use a variety of theoretical approaches, including rigorous statistical data analysis, to obtain the transport properties of the QGP.

“The resulting framework will help us unravel the underlying microscopic processes that drive the properties of the QGP, thereby providing unprecedented insights into the inner workings of nuclear matter based on QCD,” said Rapp of Texas A&M, the principal investigator of the project.

The HEFTY collaboration will receive $2.5 million from the DOE Office of Science, Office of Nuclear Physics, over five years. That funding will provide partial support for six graduate students and three postdoctoral fellows at 10 institutions, as well as a senior staff position at one of the national laboratories. It will also establish a bridge junior faculty position at Kent State University.

Partnering institutions include Brookhaven National Laboratory, Duke University, Florida State University, Kent State University, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Massachusetts Institute of Technology, Texas A&M University, and Thomas Jefferson National Accelerator Facility.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Follow @BrookhavenLab on Twitter or find us on Facebook.

Related Links

Tuesday, July 07, 2020

New collection of stars, not born in our galaxy, discovered in Milky Way

Caltech researchers use deep learning and supercomputing to identify Nyx, a product of a long-ago galaxy merger
UNIVERSITY OF TEXAS AT AUSTIN, TEXAS ADVANCED COMPUTING CENTER
IMAGE: Still from a simulation of individual galaxies forming, starting at a time when the universe was just a few million years old.
CREDIT: HOPKINS RESEARCH GROUP, CALTECH
Astronomers can go their whole career without finding a new object in the sky. But for Lina Necib, a postdoctoral scholar in theoretical physics at Caltech, the discovery of a cluster of stars in the Milky Way, but not born of the Milky Way, came early - with a little help from supercomputers, the Gaia space observatory, and new deep learning methods.
Writing in Nature Astronomy this week, Necib and her collaborators describe Nyx, a vast new stellar stream in the vicinity of the Sun that may provide the first indication that a dwarf galaxy had merged with the Milky Way disk. Such stellar streams are thought to be globular clusters or dwarf galaxies that were stretched out along their orbits by tidal forces before being completely disrupted.
The discovery of Nyx took a circuitous route, but one that reflects the multifaceted way astronomy and astrophysics are studied today.
FIRE in the Cosmos
Necib studies the kinematics -- or motions -- of stars and dark matter in the Milky Way. "If there are any clumps of stars that are moving together in a particular fashion, that usually tells us that there is a reason that they're moving together."
Since 2014, researchers from Caltech, Northwestern University, UC San Diego and UC Berkeley, among other institutions, have been developing highly-detailed simulations of realistic galaxies as part of a project called FIRE (Feedback In Realistic Environments). These simulations include everything scientists know about how galaxies form and evolve. Starting from the virtual equivalent of the beginning of time, the simulations produce galaxies that look and act much like our own.
Mapping the Milky Way
Concurrent to the FIRE project, the Gaia space observatory was launched in 2013 by the European Space Agency. Its goal is to create an extraordinarily precise three-dimensional map of about one billion stars throughout the Milky Way galaxy and beyond.
"It's the largest kinematic study to date. The observatory provides the motions of one billion stars," she explained. "A subset of it, seven million stars, have 3D velocities, which means that we can know exactly where a star is and its motion. We've gone from very small datasets to doing massive analyses that we couldn't do before to understand the structure of the Milky Way."
The discovery of Nyx involved combining these two major astrophysics projects and analyzing them using deep learning methods.
Among the questions that both the simulations and the sky survey address is: How did the Milky Way become what it is today?
"Galaxies form by swallowing other galaxies," Necib said. "We've assumed that the Milky Way had a quiet merger history, and for a while it was concerning how quiet it was because our simulations show a lot of mergers. Now, with access to a lot of smaller structures, we understand it wasn't as quiet as it seemed. It's very powerful to have all these tools, data and simulations. All of them have to be used at once to disentangle this problem. We're at the beginning stages of being able to really understand the formation of the Milky way."
Applying Deep Learning to Gaia
A map of a billion stars is a mixed blessing: so much information, but nearly impossible to parse by human perception.
"Before, astronomers had to do a lot of looking and plotting, and maybe use some clustering algorithms. But that's not really possible anymore," Necib said. "We can't stare at seven million stars and figure out what they're doing. What we did in this series of projects was use the Gaia mock catalogues."
The Gaia mock catalogue, developed by Robyn Sanderson (University of Pennsylvania), essentially asked: 'If the FIRE simulations were real and observed with Gaia, what would we see?'
Necib's collaborator, Bryan Ostdiek (formerly at University of Oregon, and now at Harvard University), who had previously been involved in the Large Hadron Collider (LHC) project, had experience dealing with huge datasets using machine and deep learning. Porting those methods over to astrophysics opened the door to a new way to explore the cosmos.
"At the LHC, we have incredible simulations, but we worry that machines trained on them may learn the simulation and not real physics," Ostdiek said. "In a similar way, the FIRE galaxies provide a wonderful environment to train our models, but they are not the Milky Way. We had to learn not only what could help us identify the interesting stars in simulation, but also how to get this to generalize to our real galaxy."
The team developed a method of tracking the movements of each star in the virtual galaxies and labelling the stars as either born in the host galaxy or accreted as the products of galaxy mergers. The two types of stars have different signatures, though the differences are often subtle. These labels were used to train the deep learning model, which was then tested on other FIRE simulations.
After they built the catalogue, they applied it to the Gaia data. "We asked the neural network, 'Based on what you've learned, can you label if the stars were accreted or not?'" Necib said.
The model ranked how confident it was that a star was born outside the Milky Way on a range from 0 to 1. The team created a cutoff with a tolerance for error and began exploring the results.
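As a rough sketch of that final selection step, the snippet below applies a confidence cutoff to a handful of made-up per-star scores; the score values and the 0.75 threshold are illustrative assumptions, not the team's actual numbers.

```python
import numpy as np

# Hypothetical network outputs: one score per star, where 0 means "born in the
# Milky Way" and 1 means "confidently accreted from another galaxy".
scores = np.array([0.02, 0.91, 0.40, 0.88, 0.76, 0.13])

threshold = 0.75                 # illustrative cutoff, not the value used in the study
accreted = scores >= threshold   # boolean mask selecting candidate accreted stars

print(f"{accreted.sum()} of {scores.size} stars flagged as likely accreted")
```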
This approach of taking a model trained on one dataset and applying it to a different but related one is called transfer learning, and it can be fraught with challenges. "We needed to make sure that we're not learning artificial things about the simulation, but really what's going on in the data," Necib said. "For that, we had to give it a little bit of help and tell it to reweight certain known elements to give it a bit of an anchor."
They first checked to see if it could identify known features of the galaxy. These include "the Gaia sausage" -- the remains of a dwarf galaxy that merged with the Milky Way about six to ten billion years ago and that has a distinctive sausage-like orbital shape.
"It has a very specific signature," she explained. "If the neural network worked the way it's supposed to, we should see this huge structure that we already know is there."
The Gaia sausage was there, as was the stellar halo -- background stars that give the Milky Way its tell-tale shape -- and the Helmi stream, another known dwarf galaxy that merged with the Milky Way in the distant past and was discovered in 1999.
First Sighting: Nyx
The model identified another structure in the analysis: a cluster of 250 stars, rotating with the Milky Way's disk, but also going toward the center of the galaxy.
"Your first instinct is that you have a bug," Necib recounted. "And you're like, 'Oh no!' So, I didn't tell any of my collaborators for three weeks. Then I started realizing it's not a bug, it's actually real and it's new."
But what if it had already been discovered? "You start going through the literature, making sure that nobody has seen it and luckily for me, nobody had. So I got to name it, which is the most exciting thing in astrophysics. I called it Nyx, the Greek goddess of the night. This particular structure is very interesting because it would have been very difficult to see without machine learning."
The project required advanced computing at many different stages. The FIRE and updated FIRE-2 simulations are among the largest computer models of galaxies ever attempted. Each of the nine main simulations -- three separate galaxy formations, each with a slightly different starting point for the sun -- took months to compute on the largest, fastest supercomputers in the world. These included Blue Waters at the National Center for Supercomputing Applications (NCSA), NASA's High-End Computing facilities, and most recently Stampede2 at the Texas Advanced Computing Center (TACC).
The researchers used clusters at the University of Oregon to train the deep learning model and to apply it to the massive Gaia dataset. They are currently using Frontera, the fastest system at any university in the world, to continue the work.
"Everything about this project is computationally very intensive and would not be able to happen without large-scale computing," Necib said.
Future Steps
Necib and her team plan to explore Nyx further using ground-based telescopes. This will provide information about the chemical makeup of the stream, and other details that will help them date Nyx's arrival into the Milky Way, and possibly provide clues on where it came from.
The next data release of Gaia in 2021 will contain additional information about 100 million stars in the catalogue, making more discoveries of accreted clusters likely.
"When the Gaia mission started, astronomers knew it was one of the largest datasets that they were going to get, with lots to be excited about," Necib said. "But we needed to evolve our techniques to adapt to the dataset. If we didn't change or update our methods, we'd be missing out on physics that are in our dataset."
The successes of the Caltech team's approach may have an even bigger impact. "We're developing computational tools that will be available for many areas of research and for non-research related things, too," she said. "This is how we push the technological frontier in general."
###

Tuesday, March 23, 2021

Cern experiment hints at new force of nature

MAGICK BY ANY OTHER NAME
IT'S A QUANTUM UNIVERSE ANYTHING CAN HAPPEN


Experts reveal ‘cautious excitement’ over unstable particles that fail to decay as standard model suggests

Ian Sample Science editor 
THE GUARDIAN
@iansample
Tue 23 Mar 2021 


Scientists at the Large Hadron Collider near Geneva have spotted an unusual signal in their data that may be the first hint of a new kind of physics.

The LHCb collaboration, one of four main teams at the LHC, analysed 10 years of data on how unstable particles called B mesons, created momentarily in the vast machine, decayed into more familiar matter such as electrons.

The mathematical framework that underpins scientists’ understanding of the subatomic world, known as the standard model of particle physics, firmly maintains that the particles should break down into products that include electrons at exactly the same rate as they do into products that include a heavier cousin of the electron, a particle called a muon.

A man rides his bicycle along the beam line of the Large Hadron Collider.
Photograph: Valentin Flauraud/AFP via Getty Images

But results released by Cern on Tuesday suggest that something unusual is happening. The B mesons are not decaying in the way the model says they should: instead of producing electrons and muons at the same rate, nature appears to favour the route that ends with electrons.

“We would expect this particle to decay into the final state containing electrons and the final state containing muons at the same rate as each other,” said Prof Chris Parkes, an experimental particle physicist at the University of Manchester and spokesperson for the LHCb collaboration. “What we have is an intriguing hint that maybe these two processes don’t happen at the same rate, but it’s not conclusive.”

In physics parlance, the result has a significance of 3.1 sigma, meaning the chance of it being a fluke is about one in 1,000. While that may sound like convincing evidence, particle physicists tend not to claim a new discovery until a result reaches a significance of five sigma, where the chance of it being a statistical quirk is reduced to one in a few million.
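For reference, those odds follow from the tail area of a Gaussian distribution; the short sketch below reproduces the approximate figures, assuming the one-sided convention commonly used when quoting significances.

```python
from scipy.stats import norm

# One-sided tail probability for a given significance in standard deviations.
for sigma in (3.1, 5.0):
    p = norm.sf(sigma)  # survival function: P(Z > sigma)
    print(f"{sigma} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")

# Roughly: 3.1 sigma is about 1 in 1,000; 5 sigma is about 1 in 3.5 million.
```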

“It’s an intriguing hint, but we have seen sigmas come and go before. It happens surprisingly frequently,” Parkes said.

The standard model of particle physics describes the particles and forces that govern the subatomic world. Constructed over the past half century, it defines how elementary particles called quarks build protons and neutrons inside atomic nuclei, and how these, usually combined with electrons, make up all known matter. The model also explains three of the four fundamental forces of nature: electromagnetism; the strong force, which holds atomic nuclei together; and the weak force which causes nuclear reactions in the sun.

But the standard model does not describe everything. It does not explain the fourth force, gravity, and perhaps more strikingly, says nothing about the 95% of the universe that physicists believe is not constructed from normal matter.


Much of the cosmos, they believe, consists of dark energy, a force that appears to be driving the expansion of the universe, and dark matter, a mysterious substance that seems to hold the cosmic web of matter in place like an invisible skeleton.

 ONCE UPON A TIME IT WAS KNOWN AS AETHER TO SCIENTISTS 

“If it turns out, with extra analysis of additional processes, that we were able to confirm this, it would be extremely exciting,” Parkes said. “It would mean there is something wrong with the standard model and that we require something extra in our fundamental theory of particle physics to explain how this would happen.”

Despite the uncertainties over this particular result, Parkes said when combined with other results on B mesons, the case for something unusual happening became more convincing.

“I would say there is cautious excitement. We’re intrigued because not only is this result quite significant, it fits the pattern of some previous results from LHCb and other experiments worldwide,” he said.

Ben Allanach, a professor of theoretical physics at the University of Cambridge, agrees that taken together with other findings, the latest LHCb result is exciting. “I really think this will turn into something,” he said.

If the result turns out to be true, it could be explained by so-far hypothetical particles called Z primes or leptoquarks that bring new forces to bear on other particles.

“There could be a new quantum force that makes the B mesons break up into muons at the wrong rate. It’s sticking them together and stopping them decaying into muons at the rate we’d expect,” Allanach said. “This force could help explain the peculiar pattern of different matter particles’ masses.”

B mesons contain elementary particles called beauty quarks, also known as bottom quarks.

Scientists will collect more data from the LHC and other experiments around the world, such as Belle II in Japan, in the hope of confirming what is happening.

Thursday, March 28, 2024

 

Aston University research center to focus on using AI to improve lives



ASTON UNIVERSITY

IMAGE: Professor Anikó Ekárt and 'Pepper' the robot

CREDIT: ASTON UNIVERSITY


•    New centre specifically focuses on using AI to improve society
•    Current research is designed to improve transport, health and industry
•    “There have been a lot of reports focusing on the negative use of AI... this is why the centre is so important now.”

Aston University researchers have marked the opening of a new centre which focuses on harnessing artificial intelligence (AI) to improve people’s lives.

The Aston Centre for Artificial Intelligence Research and Application (ACAIRA) has been set up to become a West Midlands hub for the use of AI for the benefit of society. 

Following its official opening, the academics leading it are looking to work with organisations and the public. Director Professor Anikó Ekárt said: “There have been a lot of reports focusing on the negative use of AI and subsequent fear of AI. This is why the centre is so important now, as we aim to achieve trustworthy, ethical and sustainable AI solutions for the future, by co-designing them with stakeholders.”

Deputy director Dr Ulysses Bernardet added: “We work with local, national and international institutions from academia, industry, and the public sector, expanding Aston University’s external reach in AI research and application. 

“ACAIRA will benefit our students enormously by training them to become the next generation of AI practitioners and researchers equipped for future challenges.”

The centre is already involved in various projects that use AI to solve some of society’s challenges.

A collaboration with Legrand Care aims to extend and improve independent living conditions for older people by using AI to analyse data collected through home sensors which detect decline in wellbeing. This allows care professionals to change and improve individuals’ support plans whenever needed. 

A project with engineering firm Lanemark aims to reduce the carbon footprint of industrial gas burners by exploring new, more sustainable fuel mixes. 

Other projects include work with asbestos consultancy Thames Laboratories, which aims to reduce costs and emissions, enhance productivity and improve resident satisfaction in social housing repairs, and a partnership with transport safety consultancy Agilysis to produce an air quality prediction tool that uses live data to improve transport planning decisions.

The centre is part of the University’s College of Engineering and Physical Sciences and its official launch took place on the University campus on 29 February. The event included a talk by the chair of West Midlands AI and Future Tech Forum, Dr Chris Meah. He introduced the vision for AI within the West Midlands and the importance of bringing together academics, industry and the public.

Current research in sectors such as traffic management, social robotics, bioinformatics, health, and virtual humans was highlighted, followed by industry talks from companies Smart Transport Hub, Majestic, DRPG and Proximity Data Centres. 

The centre’s academics work closely with West Midlands AI and Future Tech Forum and host the regular BrumAI Meetup.


Artificial intelligence to reconstruct particle paths leading to new physics



THE HENRYK NIEWODNICZANSKI INSTITUTE OF NUCLEAR PHYSICS POLISH ACADEMY OF SCIENCES
IMAGE: The principle of reconstructing the tracks of secondary particles based on hits recorded during collisions inside the MUonE detector. Subsequent targets are marked in gold, and silicon detector layers are marked in blue.

CREDIT: IFJ PAN




Cracow, 20 March 2024

Artificial intelligence to reconstruct particle paths leading to new physics

Particles colliding in accelerators produce numerous cascades of secondary particles. The electronics processing the signals avalanching in from the detectors then have a fraction of a second in which to assess whether an event is of sufficient interest to save it for later analysis. In the near future, this demanding task may be carried out using algorithms based on AI, the development of which involves scientists from the Institute of Nuclear Physics of the PAS.

Electronics has never had an easy life in nuclear physics. There is so much data coming in from the LHC, the most powerful accelerator in the world, that recording it all has never been an option. The systems that process the wave of signals coming from the detectors therefore specialise in... forgetting – they reconstruct the tracks of secondary particles in a fraction of a second and assess whether the collision just observed can be ignored or whether it is worth saving for further analysis. However, the current methods of reconstructing particle tracks will soon no longer suffice.

Research presented in Computer Science by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, suggests that tools built using artificial intelligence could be an effective alternative to current methods for the rapid reconstruction of particle tracks. Their debut could occur in the next two to three years, probably in the MUonE experiment which supports the search for new physics.

In modern high-energy physics experiments, particles diverging from the collision point pass through successive layers of the detector, depositing a little energy in each. In practice, this means that if the detector consists of ten layers and the secondary particle passes through all of them, its path has to be reconstructed on the basis of ten points. The task is only seemingly simple.

“There is usually a magnetic field inside the detectors. Charged particles move in it along curved lines and this is also how the detector elements activated by them, which in our jargon we call hits, will be located with respect to each other,” explains Prof. Marcin Kucharczyk (IFJ PAN), and immediately adds: “In reality, the so-called occupancy of the detector, i.e. the number of hits per detector element, may be very high, which causes many problems when trying to reconstruct the tracks of particles correctly. In particular, the reconstruction of tracks that are close to each other is quite a problem.”
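To see why the clean case is deceptively easy, here is a toy sketch of reconstructing a single, straight track from ten clean hits with a least-squares fit. The geometry, resolution, and track parameters are invented for illustration, and the sketch deliberately ignores the magnetic-field curvature and high occupancy that make the real problem hard.

```python
import numpy as np

# Ten detector layers along the beam axis (hypothetical positions).
layer_z = np.arange(10, dtype=float)

# A hypothetical straight track, smeared by an assumed detector resolution.
true_slope, true_offset = 0.3, 1.0
rng = np.random.default_rng(0)
hits_x = true_slope * layer_z + true_offset + rng.normal(0.0, 0.05, size=10)

# Classical reconstruction in this idealized case: a least-squares line fit.
slope, offset = np.polyfit(layer_z, hits_x, deg=1)
print(f"fitted slope = {slope:.3f}, fitted offset = {offset:.3f}")
```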

Experiments designed to find new physics will collide particles at higher energies than before, meaning that more secondary particles will be created in each collision. The luminosity of the beams will also have to be higher, which in turn will increase the number of collisions per unit time. Under such conditions, classical methods of reconstructing particle tracks can no longer cope. Artificial intelligence, which excels where certain universal patterns need to be recognised quickly, can come to the rescue.

“The artificial intelligence we have designed is a deep-type neural network. It consists of an input layer made up of 20 neurons, four hidden layers of 1,000 neurons each and an output layer with eight neurons. All the neurons of each layer are connected to all the neurons of the neighbouring layer. Altogether, the network has two million configuration parameters, the values of which are set during the learning process,” describes Dr Milosz Zdybal (IFJ PAN).

The deep neural network thus prepared was trained using 40,000 simulated particle collisions, supplemented with artificially generated noise. During the testing phase, only hit information was fed into the network. As these were derived from computer simulations, the original trajectories of the responsible particles were known exactly and could be compared with the reconstructions provided by the artificial intelligence. On this basis, the artificial intelligence learned to correctly reconstruct the particle tracks.
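For readers who want a concrete picture of the setup described above, the sketch below wires up a fully connected network with the stated layer sizes (20 inputs, four hidden layers of 1,000 neurons, 8 outputs) and runs a placeholder training loop standing in for the 40,000 simulated collisions. The activation function, loss, optimizer, batch size, epoch count, and random stand-in data are assumptions for illustration, not details taken from the IFJ PAN paper.

```python
import torch
from torch import nn

# Fully connected network with the quoted layer sizes; ReLU is an assumed activation.
model = nn.Sequential(
    nn.Linear(20, 1000), nn.ReLU(),
    nn.Linear(1000, 1000), nn.ReLU(),
    nn.Linear(1000, 1000), nn.ReLU(),
    nn.Linear(1000, 1000), nn.ReLU(),
    nn.Linear(1000, 8),
)

# Stand-ins for the simulated collisions: 20 hit coordinates in,
# 8 track parameters (known exactly from the simulation) out.
hits = torch.randn(40_000, 20)
true_params = torch.randn(40_000, 8)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                     # illustrative number of training passes
    for i in range(0, len(hits), 256):     # mini-batches of 256 events
        x, y = hits[i:i + 256], true_params[i:i + 256]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```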

“In our paper, we show that the deep neural network trained on a properly prepared database is able to reconstruct secondary particle tracks as accurately as classical algorithms. This is a result of great importance for the development of detection techniques. Whilst training a deep neural network is a lengthy and computationally demanding process, a trained network reacts instantly. Since it does this also with satisfactory precision, we can think optimistically about using it in the case of real collisions,” stresses Prof. Kucharczyk.

The closest experiment in which the artificial intelligence from IFJ PAN would have a chance to prove itself is MUonE (MUon ON Electron elastic scattering). It examines an interesting discrepancy between the measured value of a certain physical quantity related to muons (particles about 200 times more massive than the electron) and the predictions of the Standard Model (the model used to describe the world of elementary particles). Measurements carried out at the American accelerator centre Fermilab show that the so-called anomalous magnetic moment of the muon differs from the predictions of the Standard Model with a certainty of up to 4.2 standard deviations (referred to as sigma). Meanwhile, it is accepted in physics that a significance above 5 sigma, corresponding to a certainty of 99.99995%, is required to announce a discovery.

The significance of the discrepancy indicating new physics could be significantly increased if the precision of the Standard Model's predictions could be improved. However, in order to better determine the anomalous magnetic moment of the muon with its help, it would be necessary to know a more precise value of the parameter known as the hadronic correction. Unfortunately, a mathematical calculation of this parameter is not possible. At this point, the role of the MUonE experiment becomes clear. In it, scientists intend to study the scattering of muons on electrons of atoms with low atomic number, such as carbon or beryllium. The results will allow a more precise determination of certain physical parameters that directly depend on the hadronic correction. If everything goes according to the physicists' plans, the hadronic correction determined in this way will increase the confidence in measuring the discrepancy between the theoretical and measured value of the muon's anomalous magnetic moment by up to 7 sigma – and the existence of hitherto unknown physics may become a reality.

The MUonE experiment is to start at Europe's CERN nuclear facility as early as next year, but the target phase has been planned for 2027, which is probably when the Cracow physicists will have the opportunity to see if the artificial intelligence they have created will do its job in reconstructing particle tracks. Confirmation of its effectiveness in the conditions of a real experiment could mark the beginning of a new era in particle detection techniques.

The work of the team of physicists from the IFJ PAN was funded by a grant from the Polish National Science Centre.

The Henryk Niewodniczański Institute of Nuclear Physics (IFJ PAN) is currently one of the largest research institutes of the Polish Academy of Sciences. A wide range of research carried out at IFJ PAN covers basic and applied studies, from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly publication output of IFJ PAN includes over 600 scientific papers in high-impact international journals. Each year the Institute hosts about 20 international and national scientific conferences. One of the most important facilities of the Institute is the Cyclotron Centre Bronowice (CCB), which is an infrastructure unique in Central Europe, serving as a clinical and research centre in the field of medical and nuclear physics. In addition, IFJ PAN runs four accredited research and measurement laboratories. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium: “Matter-Energy-Future”, which in the years 2012-2017 enjoyed the status of the Leading National Research Centre (KNOW) in physics. In 2017, the European Commission granted the Institute the HR Excellence in Research award. As a result of the categorization of the Ministry of Education and Science, the Institute has been classified into the A+ category (the highest scientific category in Poland) in the field of physical sciences.

SCIENTIFIC PUBLICATIONS:

“Machine Learning based Event Reconstruction for the MUonE Experiment”

M. Zdybał, M. Kucharczyk, M. Wolter

Computer Science 25(1) (2024) 25-46

DOI: 10.7494/csci.2024.25.1.5690

 

LINKS:

http://www.ifj.edu.pl/

The website of the Institute of Nuclear Physics, Polish Academy of Sciences.

http://press.ifj.edu.pl/

Press releases of the Institute of Nuclear Physics, Polish Academy of Sciences.

 

IMAGES:

IFJ240320b_fot01s.jpg                                 

HR: http://press.ifj.edu.pl/news/2024/03/20/IFJ240320b_fot01.jpg

The principle of reconstructing the tracks of secondary particles based on hits recorded during collisions inside the MUonE detector. Subsequent targets are marked in gold, and silicon detector layers are marked in blue. (Source: IFJ PAN)


 

Friday, March 13, 2020


REPORTER'S NOTEBOOK
2020 Time Capsule


This afternoon, on the heels of a widely panned formal Oval Office address, Donald Trump assembled a group of scientific and corporate leaders to talk about dealing with the coronavirus. You can watch the whole thing on the White House YouTube channel.

I suspect that we’ll see one line from this conference played frequently in the months ahead. You can watch it starting at around 1:22:00, when reporter Kristen Welker of NBC asks Trump whether he takes responsibility for the lag in making test kits available.

Trump’s reply:
No.
I don’t take responsibility at all.

Narrowly parsed, and in full context, Trump was referring only to the test kits — and was continuing his (fantasized) complaint that rules left over from 2016, under the Obama administration, are the real reason the U.S. has been so slow to respond to this pandemic.

But filmable moments in politics are not always taken in their full context or at their most narrowly parsed logical reading. Think of:
“They cling to guns and religion.”
“I voted for it, before I voted against it.”
“Basket of deplorables.”
“Brownie, you’re doing a heck of a job.”
“Stuff happens.” (Donald Rumsfeld on the chaos of post-invasion Iraq)
“It depends on what the meaning of the word ‘is’ is.”
“Peace for our time.”

All of these—in full context, and most-sympathetically read—had a meaning you could understand and perhaps defend. None of that context or meaning survived, as those went from being phrases to weaponized symbols.

Will that happen to “I don’t take responsibility at all”? We will soon see.

Other stage business points:
A series of CEOs came to the microphone to describe what their companies were doing to speed testing or help out in other ways. Trump caught the first three or four of them unawares, by shaking their hands as they moved away from the podium. All seemed startled, as you can see in the video.

Then the other CEOs began to catch on, and a following group of them scuttled away from the podium before Trump could grab them for a handshake, or held their own hands clenched together, in a protective prayer-style grasp.

Finally (at 1:06 in the video), you can see Bruce Greenstein, of the LHC Group, surprise Trump with an elbow bump rather than a handshake. Trump himself seemed completely oblivious to the idea of social distancing. It was also notable that one speaker after another touched and moved around the same microphone, and put his or her hands on the sides of the same podium.
I mentioned earlier today the uneasy and evolving position of Anthony Fauci, who has been the “voice of science” through this episode as he has during previous medical emergencies. The uneasiness lies in the tension between his decades as a respected scientist and his current role as a prominent member of Team Trump. Can he retain his long reputation as a straight shooter while maintaining any influence with Trump?

Make what you will of his body language through the events today. (He is at far left in the picture below.)
White House
Also today, I mentioned the inevitable-for-Trump, though inconceivable-in-other-administrations, ritual of Trump subordinates limitlessly praising the goodness and wisdom of their leader. It was striking to see all the CEOs skipping right past that formality.

But if you felt a phantom-limb twinge in the absence of these comments, all you had to do was wait for Mike Pence. You can hear him starting at around 1:07, with comments that began “This day should be an inspiration to every American” and built in earnestness from there.

There was much more from the question-and-answer session, but I don’t want to spoil the experience of discovery for anyone who has not seen it yet.

Two hundred and thirty-five days until the election.

As of today, March 13, 2020—three-plus years into the current administration, three months into public awareness of the coronavirus spread, seven-plus months before the next election—Anthony Fauci is playing a role in which no previous Trump-era figure has survived.

One other person has been in the spot Fauci now occupies. That is, of course, James Mattis, the retired four-star Marine Corps general and former secretary of defense for Trump. Former is the key word here, and the question is whether the change in circumstances between Mattis’s time and Fauci’s—the public nature of this emergency, the greater proximity of upcoming elections, the apparent verdict from financial markets and both international and domestic leaders that Donald Trump is in deep over his head—will give Fauci the greater leverage he needs, not just to stay at work but also to steer policy away from the abyss.

Why is Anthony Fauci now, even more than James Mattis before him, in a different position from any other publicly visible associate of Trump’s?
Pre-Trump credibility, connections, and respect. Fauci has been head of the National Institute of Allergy and Infectious Diseases, at the National Institutes of Health, since Ronald Reagan’s first term, in 1984. (How can he have held the post so long? Although nothing in his look or bearing would suggest it, Fauci is older than either Bernie Sanders or Joe Biden. He recently turned 79.)

Through his long tenure at NIH, which spanned the early days of the HIV/AIDS devastation and later experience with the SARS and H1N1 epidemics, Fauci has become a very familiar “public face of science,” explaining at congressional hearings and in TV and radio interviews how Americans should think about the latest threat. He has managed to stay apart from any era’s partisan-political death struggles. He has received a raft of scientific and civic honors, from the Lasker Award for health leadership, to the Presidential Medal of Freedom, awarded by George W. Bush.

Thus, in contrast to virtually all the other figures with whom Trump has surrounded himself, Fauci is by any objective standard the best person for the job — and is universally seen as such. This distinguishes him from people Trump has favored in his own coterie, from longtime consigliere Michael Cohen to longtime ally Roger Stone to longtime personal physician Harold Bornstein; and from past and present members of his White House staff, like the departed Michael Flynn and the returned Hope Hicks and the sempiternal Jared Kushner; and fish-out-of-water Cabinet appointees, like (to pick one) the neurosurgeon Ben Carson as Secretary of Housing and Urban Development.

Put another way: Very plainly, Trump needs Fauci more than Fauci needs Trump. This is not a position Donald Trump has ever felt comfortable in—witness the denouement with Mattis.

The ability not to abase himself before Trump. The first Cabinet meeting Donald Trump held, nearly three years ago, was unlike any other conducted in U.S. history, and very much like subsequent public appearances of Trump in company with his appointees.

In that meeting, on June 12, 2017, as TV cameras were rolling, Trump went around the table and one-by-one had his appointees gush about how kind, wise, and far-sighted he was—failing only to compliment him on his humility. (Tina Nguyen described the meeting at the time in Vanity Fair.) After praising himself, Trump called on others to praise him, starting with the reliable Mike Pence. “It is the greatest privilege of my life to serve as the vice president to a president who is keeping his word to the American people,” Pence began. All the others followed his example—with the prominent exception of Mattis. He spent his “praise” time instead complimenting the men and women in uniform he led.

No public event like that Cabinet meeting had happened before in the United States, simply because no other president has been as needy for in-public adulation as Trump is. Of course most politicians and all presidents are needy; you could not run for the presidency if you had a normal temperament. (Background reading on this point, while you’re “socially isolating”: Robert Penn Warren’s All the King’s Men.) Every political leader eats up the praise in private—“Wonderful job today, Mr. President—you were really connecting!”, not to mention Veep—but all the rest of them have been savvy enough to know how tacky this looks in public. The modern exception-illustrating-the-rule might have been Lyndon Johnson, with enough of the Sun King in his makeup to enjoy having people humble themselves before him. But holding a public adulation-fest? If George W. Bush had heard, say, Karl Rove start in that way, he would likely have said, “OK, Turd Blossom, what are you angling for?” Barack Obama—or John F. Kennedy, or Jimmy Carter— would have arched an eyebrow as if to ask, “Hey, did you think you were still playing in the minors?”

But what we saw in that Cabinet meeting, we have seen again and again from those around Trump. The most humiliating recent examples come from the people in charge of the coronavirus response: Pence again; Alex Azar, head of Health and Human Services; Robert Redfield, head of the Centers for Disease Control; and Seema Verma, in charge of Medicare and Medicaid. The beginnings and endings of their public statements, and the answers to many questions, are larded with praise for Trump and his “decisive and visionary action.” (For the latest example, see Verma under questioning from Martha MacCallum of Fox News. Verma repeatedly dodges MacCallum’s direct question about whether hospitals have enough ventilators and other supplies, as Fred Barbash laid out in the Washington Post. MacCallum makes one last try—and Verma seeks refuge in saying, “And that’s why the president has taken such a bold and decisive action.” That claim made no logical sense to MacCallum or the listeners, but it reflected the inescapable logic of what is expected from members of the Court of Trump.)

There is one exception: Anthony Fauci. He has occasionally said that he agrees with aspects of the administration’s or the president’s policies, but he has avoided the ritual self-abnegation. Of course Fauci held his job long before Trump came to town, and is not part of the normal round of high-level appointments each new administration makes. (To the best of my knowledge, though, directors of NIH institutes, like Fauci, serve “at the pleasure of the president” and so could be removed. If I’m wrong on that, will update.)

But Fauci’s polite but consistent reluctance to grovel cannot have gone unnoticed by the audience-of-one for all the other appointees: Trump himself.

Daring to contradict Trump, in public. This is a step beyond anything Mattis attempted. Through the first two years of the administration, background-sourced stories and reports based on “those in a position to know the Secretary’s thinking” laid out the increasing distance between Mattis’s view of American interests and what Trump was saying and doing.

But there is no precedent, from Mattis or anyone else, for what we have seen these past few weeks from Fauci at the podium. Is the coronavirus problem just going to go away (as Trump had claimed)? No, from Fauci. It is serious, and it is going to get worse. Is the testing system “perfect” (as Trump had claimed)? No, it is not working as it should. Is the U.S. once again the greatest of all nations in its response to the threat? No, it is behind in crucial aspects, and has much to learn from others.

Fauci is saying all these things politely and respectfully. As an experienced Washington operator he knows that there is no reason to begin an answer with, “The president is wrong.” You just skip to the next sentence, “The reality is...” But his meaning—“the president is wrong”—is unmistakable.

Anthony Fauci has earned the presumption-of-credibility for his comments. Donald Trump has earned the presumption that he is lying or confused. A year ago that standoff—the realities, versus Trump-world obeisance—worked out against James Mattis. Will the balance of forces be different for Fauci? As of this writing, no one can know.

2020 Time Capsule #1: Four Ways Trump’s Oval Office Address Failed

Tom Brenner / Reuters March 12, 2020

Four years ago, when Donald Trump was on his rise—from apparent-joke candidate, to long-shot, to front-runner, to nominee, and on to electoral winner—I wrote in this space a series of “Trump Time Capsules.”

They started with #1, back in May, 2016, when a Paris-bound airliner plunged into the Mediterranean and Trump immediately declared that the cause must have been terrorism. “What just happened?” he shouted to a rally crowd before wreckage had even been found. “A plane got blown out of the sky. And if anybody thinks it wasn’t blown out of the sky, you’re 100 percent wrong, folks, OK? You’re 100 percent wrong.” (Naturally, French authorities later determined that the crash arose from a mechanical problem.)

They ended with installment #152, just before the election, at the time when James Comey’s last-minute reopening of the Hillary Clinton email case was dominating headlines. In between there were installments about Paul Manafort’s fishy-looking role, the “grab ‘em by...” moment, Trump’s comments about the “Mexican judge,” and the shift of one-time Trump ridiculers like Lindsey Graham and Mitch McConnell into a Vichy Republican coalition.

Through all the posts, the idea was to record in real time what people knew about Donald Trump, about the country, and about the issues and stakes in the election, before any of us knew how the contest was going to turn out. As I wrote in introducing the very first installment four years ago:

People will wonder about America in our time. It can be engrossing to look back on dramatic, high-stakes periods in which people were not yet sure where things would lead, to see how they assessed the odds before knowing the outcome. The last few months of the 1968 presidential campaign: would it be Humphrey, Nixon, or conceivably even George Wallace? Or 1964: was there a chance that Goldwater might win? The impeachment countdown for Richard Nixon, in 1974? The Bush-Gore recount watch in 2000?

The Trump campaign this year will probably join that list. The odds are still against his becoming president, but no one can be sure what the next five-plus months will bring. Thus for time-capsule purposes, and not with the idea that this would change a single voter’s mind, I kick off what I intend as a regular feature. Its purpose is to catalogue some of the things Donald Trump says and does that no real president would do.

We are again in a not-yet-sure moment.

- About the upcoming election.

- About the unfolding-by-the-minute consequences of the coronavirus pandemic.

- About the recent collapse of the stock markets, and the less immediately visible, but ultimately far more damaging, economic and social effects of the sudden simultaneous collapse of the travel and lodging industries, of the live-events and sports and conference and entertainment businesses, of restaurants and bars, of taxis and trains, of stores in college towns, and of the impact of all of this on the people who unload baggage from airliners or clean rooms for hotel guests or work as security guards at museums or sell jerseys at baseball games. Such roles are not as resonant as “steelworkers” or “coal miners” in political or journalistic discourse, but these jobs collectively form a very large part of the economy, they’re very hard to do over the internet or “remotely,” and they’re being eliminated at a pace not seen in at least a dozen years, and probably since the 1930s.

We don’t know.

So behind our veil of ignorance about outcomes, this is another chronicle of what we knew and heard day by day, which I intend to operate, as with the original series, through the upcoming election season.

Obviously I am skipping through what would be several decades’ worth of news in normal circumstances: impeachment, the Democratic primaries, the evisceration of legal norms, and so on down a long list.

Instead, for an arbitrary starting point, let’s begin with Trump’s Oval Office address last night on the virus threat. I have experience with this rhetorical form: I wrote a number of such addresses long ago when Jimmy Carter was president, and I have studied dozens of them in the intervening years.

This latest Trump speech was uniquely incompetent and inappropriate, and it’s worth noting why, as American voters decide whether to retain him in office.

One audience that Trump himself takes seriously—the world financial system—obviously took a dim view of his statement, as markets around the world headed sharply downward practically as soon as he began to talk. Of course, their view indirectly affects everyone else.

But from a political, rhetorical, and civic perspective, what was wrong with the speech? While watching it, I was assessing the speech by two standards: What it showed about Trump and his styles of thought, and what it showed about presidents and their roles in similar moments of stress.

As for Trump himself, his public vocabulary is strikingly limited on a deployable-word-count basis: “Many people are saying,” “it’s the greatest ever,” “we have tremendous people,” “very good things are happening,” “there has never been anything like it,” and of course “sir.”

Equally striking is the consistency, or narrowness, of the messages Trump delivers. A huge proportion of his entire discourse can be boiled down to two themes:
I am so great, and am doing a better job than anyone else ever has. (Biggest crowds, best economy, most loyal supporters, etc.)
Other people are such cheaters—and it is outrageous what they are trying to get away with. (They’re sending rapists; they’re behind on their NATO payments; they’re ripping us off in trade; etc.)

I won’t go through the whole classification of his discourse into these two categories, but nearly everything he said last night could be boiled down to one or the other of those themes.

I am so great and am doing the best possible job. (“This is the most aggressive and comprehensive effort to confront a foreign virus in modern history … Our team is the best anywhere in the world … Because of the economic policies that we have put into place over the last three years, we have the greatest economy anywhere in the world, by far.”)
Other people are mistreating us and are to blame. (Repeated references to the “foreign virus,” banning entry from most foreign nationals who have recently been in Europe, etc.)

Of course, every presidential address in every era has implicitly argued, I am doing a good job. Whether the challenge they’re dealing with is the Great Depression or the 9/11 attacks, Pearl Harbor or the Cuban Missile Crisis, when describing the challenge and their intended response, all presidents are effectively saying: You can feel better about this emergency, because I have a plan.

But until Trump, other presidents have applied the “show, don’t tell” policy when it comes to their own competence. They want to show they are acting the way the country would hope, so they don’t have to say it.

Trump says it himself. He quotes other people saying it about him. And he insists on hearing about his greatness from his retinue—most recently in the fawning statements made by his own vice president and secretary of health and human services, who preface their updates about the virus with North Korean-style compliments for the leader’s far-sighted action.

Five years into Trump’s presence as a foreground political figure, many listeners are inured to the two unvarying notes in his presentations: that what is good has come from him, and what is bad has come from someone else. But the prominence of these two notes in an Oval Office address was a reminder of how much we have learned to overlook. This is not how presidents have ever talked before.

And what about the speech, just as a speech? In my view it had three problems: how it was conceived; how it was written; and how it was delivered. (Plus, a bonus fourth problem I’ll get to at the end.)

How it was conceived: An Oval Office address is by definition about a big problem. (Otherwise, why is a president imposing on our time this way?) And its purpose is to answer several explicit questions: Why did this happen? How bad is it? What are we going to do about it? It also, always, must answer a deeper, broader, and more important question: Will we be OK?

Abraham Lincoln’s First and Second Inaugural Addresses can be thought of as precursors to Oval Office addresses of the broadcast era, and as the ideal form of such speeches, answering all these questions. (Why did this happen? “In your hands, my dissatisfied fellow-countrymen, and not in mine, is the momentous issue of civil war…. You have no oath registered in heaven to destroy the Government, while I shall have the most solemn one to ‘preserve, protect, and defend’ it.” Will we be OK? “With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in.”)

Again, that’s the ideal form, but it is one that other presidents have had in mind as the model to work toward. These addresses have been about us, the American family, not about me, the leader. But Trump has only the me note in his vocal and emotional range, except for them as the enemies. He used the word us in the speech, but it was just a word. Audiences swallow a lot of guff from politicians, entertainers, and other public figures. But over time, the public can size up its most familiar performers and recognize which words ring true to them, and which they’re just reading from a script.

And this is entirely apart from the speech’s failure to address the major elephant-in-the-room questions reporters, governors, and public health officials had been asking. Starting with, Why are we so far behind with tests?

How it was written: It was written badly.

How it was delivered: Donald Trump is very effective and entertaining as an unscripted live performer, riffing and feeding off the energy of a crowd. Why does he keep going to big rallies? Partly because the crowds adore him there, and partly because this is what he’s genuinely good at. His rallies—part greatest-hits, part “you have to be there to believe it!” surprises—are great shows. That’s how he commanded so much free airtime on cable TV through 2015 and early 2016: it was the latest must-watch reality show.

But you can’t do that in every speech. And while Trump can still slip a little bit of his rally-meister style into an hour-long State of the Union address, it’s just impossible in 10 minutes behind the Resolute desk. And thus he seemed robotic, even narcotized. Presumably he had seen the text before he encountered it on the TelePrompter—in normal circumstances, a president would have done practice run-throughs many times before the cameras came on. But to judge from his delivery, he was trying to parse his way through sentences he had never seen before. If this seems harsh, compare George W. Bush’s Oval Office address after the 9/11 attacks, or Ronald Reagan’s in 1986 on defense spending and arms-limitation talks.

Bonus: Within an hour of Trump’s speech, other parts of the government were issuing “clarifications” about points he had misstated in his speech. No, not all travel from Europe was suspended. No, the European transit ban did not apply to cargo. No, Americans coming back didn’t need to be screened before reentry. And no, on other points.

Had the need for immediate fact-checking arisen, with any previous Oval Office address? Not that I am aware of. Whatever political party holds the White House and whatever policies these speeches seek to advance, such addresses usually reflect the greatest level of attention to detail that a president’s team can apply. Unfortunately, it probably did so in this case, too.

Twelve hours after Trump’s speech, Joe Biden gave an address that was “presidential,” by the standards listed above. It expressed concern for those suffering in medical, financial, or emotional ways. It laid out what was known and unknown about the challenge. Implicitly, it argued: We will be OK.

What the contrast between the speeches means, politically or in terms of public health, we don’t know at this moment. As of this installment, we know that Donald Trump faced a familiar test of presidential mettle, and badly failed.