
Saturday, August 26, 2023

  • Could the Universe be a giant quantum computer?


Physics might all be explained as the manipulation of bits of information. Credit: Getty

The death of US computer scientist and physicist Edward Fredkin this June went largely unnoticed, except for a belated obituary in the New York Times. Yet despite never quite becoming the household name that some of his contemporaries did, Fredkin had an outsized influence on both of the disciplines that he straddled.

Many still baulk at his central contention: that the laws of physics, and indeed those of the Universe itself, are essentially the result of a computer algorithm. But the ‘digital physics’ that Fredkin championed has gone from being beyond the pale to almost mainstream. “At the time it was considered a completely crazy idea that computation science could teach you anything about physics,” says Norman Margolus, a Canadian computer scientist who was a long-time collaborator of Fredkin’s and his sole physics PhD student. “The world has evolved from then, it’s all very respectable now.”

A dropout from the California Institute of Technology (Caltech) in Pasadena after his freshman year, Fredkin joined the US Air Force in 1953, becoming a fighter pilot and eventually the instructor for the elite corps of tight-formation jet pilots. The Air Force set him on to computer science, sending him to the Massachusetts Institute of Technology (MIT) Lincoln Laboratory in Lexington in 1956, to work on using computers to process radar information to guide pilots. Leaving the Air Force in 1958, Fredkin joined the pioneering computing company Bolt Beranek & Newman, based in Cambridge, Massachusetts — now part of Raytheon — where, among other projects, he wrote an early assembler language and participated in artificial-intelligence (AI) research. After founding his own company, Information International, specializing in imaging hardware and software, he came back to MIT in 1968 as a full professor, despite not even having an undergraduate degree.

Fredkin ended up directing Project MAC, a research institute that evolved into MIT’s Laboratory for Computing Science. The position was just one of a wide portfolio. “He did a lot of things in the real world,” says Margolus, now an independent researcher affiliated with MIT. These included running his company, designing a reverse-osmosis system for a desalination company and managing New England Television, the ABC affiliate in Boston, Massachusetts. Contractually limited to one day a week of outside activities, Fredkin was sometimes not seen for weeks at a time, says Margolus.

Forward thinking

In the late 1960s, AI was still a mostly theoretical concept, yet Fredkin was early to grasp the policy challenges that machines capable of learning and autonomous decision-making pose, including for national security. He championed international collaboration on AI research, recognizing that early consensus on how the technology should be used would prevent problems down the line. However, attempts to convene an international meeting of top thinkers in the field never quite materialized — a failure that resonates to this day.

In 1974, Fredkin left MIT and spent a year as a distinguished scholar at Caltech, where he befriended the physicists Richard Feynman and Stephen Hawking. He then accepted a tenured faculty position at Carnegie Mellon University in Pittsburgh, Pennsylvania, and later a second position at Boston University. It was around then that he began his work on reversible computing.


Edward Fredkin saw few limits to what computing might explain. Credit: School of Computer Science/Carnegie Mellon University

At the time, reversible computing was widely considered impossible. A conventional digital computer is assembled from an array of logic gates — ANDs, ORs, XORs and so on — in which, generally, two inputs become one output. The input information is erased, producing heat, and the process cannot be reversed. With Margolus and a young Italian electrical engineer, Tommaso Toffoli, Fredkin showed that certain gates with three inputs and three outputs — what became known as Fredkin and Toffoli gates — could be arranged such that all the intermediate steps of any possible computation could be preserved, allowing the process to be reversed on completion. As they set out in a seminal 1982 paper, a computer built with those gates might, theoretically at least, produce no waste heat and thus consume no energy [1].
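The behaviour of a Fredkin gate (a controlled swap: three bits in, three bits out) can be sketched in a few lines of Python. This is an illustration of the general idea, not the authors' original circuit formulation; bits are modelled here as plain integers:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-SWAP) gate: if the control bit c is 1,
    the other two bits are swapped; otherwise all three bits pass
    through unchanged."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: the gate is its own inverse, so applying it twice
# recovers the original input and no information is ever erased.
for c in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert fredkin(*fredkin(c, a, b)) == (c, a, b)

# Universality: fixing the third input to 0 makes the third output
# equal to (c AND a), so ordinary logic can be built from this gate.
assert all(fredkin(c, a, 0)[2] == (c & a)
           for c in (0, 1) for a in (0, 1))
```

Because every output uniquely determines its input, no bit is ever discarded, which is what lets such a circuit, in principle, dissipate no heat.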

This seemed initially no more than a curiosity. Fredkin felt that the concept might help in the development of more efficient computers with less wasted heat, but there was no practical way to realize the idea fully using classical computers. In 1981, however, history took a new turn, when Fredkin and Toffoli organized the Physics of Computation Symposium at MIT. Feynman was among the luminaries present. In a now famous contribution, he suggested that, rather than trying to simulate quantum phenomena with conventional digital computers, some physical systems that exhibit quantum behaviour might be better tools.

This talk is widely seen as ushering in the age of quantum computers, which harness the full power of quantum mechanics to solve certain problems — such as the quantum-simulation problem that Feynman was addressing — much faster than any classical computer can. Four decades on, small quantum computers are now in development. The electronics, lasers and cooling systems needed to make them work consume a lot of power, but the quantum logical operations themselves are pretty much lossless.

Digital physics

Reversible computation “was an essential precondition really, for being able to conceive of quantum computers”, says Seth Lloyd, a mechanical engineer at MIT who in 1993 developed what is considered the first realizable concept for a quantum computer [2]. Although the IBM physicist Charles Bennett had also produced models of reversible computation, Lloyd adds, it was the zero-dissipation versions described by Fredkin, Toffoli and Margolus that ended up becoming the models on which quantum computation was built.

In their 1982 paper, Fredkin and Toffoli had begun developing their work on reversible computation in a rather different direction. It started with a seemingly frivolous analogy: a billiard table. They showed how mathematical computations could be represented by fully reversible billiard-ball interactions, assuming a frictionless table and perfectly elastic collisions between the balls.

This physical manifestation of the reversible concept grew from Toffoli’s idea that computational concepts could be a better way to encapsulate physics than the differential equations conventionally used to describe motion and change. Fredkin took things even further, concluding that the whole Universe could actually be seen as a kind of computer. In his view, it was a ‘cellular automaton’: a collection of computational bits, or cells, that can flip states according to a defined set of rules determined by the states of the cells around them. Over time, these simple rules can give rise to all the complexities of the cosmos — even life.
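A minimal cellular automaton can be written in a few lines. The sketch below uses a one-dimensional, Wolfram-style elementary rule (far simpler than anything Fredkin proposed for physics, and purely illustrative): each cell flips state according to a fixed lookup table over itself and its two neighbours.

```python
def step(cells, rule=110):
    """One update of a 1-D cellular automaton with periodic boundaries.
    Each cell's next state depends only on (left, self, right); `rule`
    encodes the 8-entry lookup table in Wolfram's numbering scheme."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell already generates intricate structure over time.
row = [0] * 16 + [1] + [0] * 16
for _ in range(5):
    row = step(row)
```

The point of the digital-physics programme was that rules of roughly this flavour, iterated over vast numbers of cells, might reproduce the behaviour of particles and forces.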

He wasn’t the first to play with such ideas. Konrad Zuse — a German civil engineer who, before the Second World War, had developed one of the first programmable computers — suggested in his 1969 book Calculating Space that the Universe could be viewed as a classical digital cellular automaton. Fredkin and his associates developed the concept with intense focus, spending years searching for examples of how simple computational rules could generate all the phenomena associated with subatomic particles and forces [3].

Not everyone was impressed. Margolus recounts that the renowned physicist Philip Morrison, then also on the faculty at MIT, told Fredkin’s students that Fredkin was a computer scientist, so he thought that the world was a big computer, but if he had been a cheese merchant, he would think the world was a big cheese. When the British computer scientist Stephen Wolfram proposed similar ideas in his 2002 book A New Kind of Science, Fredkin reacted by saying “Wolfram is the first significant person to believe in this stuff. I’ve been very lonely.”

In truth, however, Wolfram was not alone in exploring the ideas. Whereas Fredkin himself initially used the phrase ‘digital physics’, and later ‘digital philosophy’, modern variations on the theme go by terms such as ‘pancomputationalism’ and ‘digitalism’. They have been espoused by researchers including the Dutch Nobel laureate Gerard ‘t Hooft and the US physicist John Wheeler, whose famous “it from bit” slogan is a pithy expression of the hypothesis.

Into the quantum realm

Some, including Margolus, have continued to develop the classical version of the theory. Others have concluded that a classical computational model could not be responsible for the complexities of the Universe that we observe. According to Lloyd, Fredkin’s original digital-universe theory has “very serious impediments towards a classical digital universe being able to comprehend quantum mechanical phenomena”. But swap the classical computational rules of Fredkin’s digital physics for quantum rules, and a lot of those problems melt away. You can capture intrinsic features of a quantum Universe such as entanglement between two quantum states separated in space in a way that a theory built on classical ideas can’t.

Lloyd espoused this idea in a series of papers starting in the 1990s, as well as in his 2006 book Programming the Universe. It culminated in a comprehensive account of how the rules of quantum computation might account for the known laws of physics — elementary-particle theory, the standard model of particle physics and perhaps even the holy grail of fundamental physics: a quantum theory of gravity [4].

Such proposals are very distinct from the more recent idea that we live in a computer simulation, advanced by the Swedish philosopher Nick Bostrom at the University of Oxford, UK, among others [5]. Whereas the digital Universe posits that the basic initial conditions and rules of the computational universe arose naturally, much as particles and forces of traditional physics arose naturally in the Big Bang and its aftermath, the simulation hypothesis posits that the Universe was all deliberately constructed by some highly advanced intelligent alien programmers, perhaps as some kind of grand experiment, or even as a kind of game — an implausibly involved effort, in Lloyd’s view.

The basic idea of a digital Universe might just be testable. For the cosmos to have been produced by a system of data bits at the tiny Planck scale — a scale at which present theories of physics are expected to break down — space and time must be made up of discrete, quantized entities. The effect of such a granular space-time might show up in tiny differences, for example, in how long it takes light of various frequencies to propagate across billions of light years. Really pinning down the idea, however, would probably require a quantum theory of gravity that establishes the relationship between the effects of Einstein’s general theory of relativity at the macro scale and quantum effects on the micro scale. This has so far eluded theorists. Here, the digital universe might just help itself out. Favoured routes towards quantum theories of gravitation are gradually starting to look more computational in nature, says Lloyd — for example the holographic principle introduced by ‘t Hooft, which holds that our world is a projection of a lower-dimensional reality. “It seems hopeful that these quantum digital universe ideas might be able to shed some light on some of these mysteries,” says Lloyd.

That would be just the latest twist in an unconventional story. Fredkin himself thought that his lack of a typical education in physics was, in part, what enabled him to arrive at his distinctive views on the subject. Lloyd tends to agree. “I think if he had had a more conventional education, if he’d come up through the ranks and had taken the standard physics courses and so on, maybe he would have done less interesting work.”

Nature 620, 943-945 (2023)

doi: https://doi.org/10.1038/d41586-023-02646-x

References

  1. Fredkin, E. & Toffoli, T. Int. J. Theor. Phys. 21, 219–253 (1982).

  2. Lloyd, S. Science 261, 1569–1571 (1993).

  3. Fredkin, E. Physica D 45, 254–270 (1990).

  4. Lloyd, S. Preprint at https://arxiv.org/abs/1312.4455 (2013).

  5. Bostrom, N. Philos. Q. 53, 243–255 (2003).

Monday, September 06, 2021

“Powerful Hints” –Quantum Beginning of Spacetime (Weekend Feature)


 

“We didn’t have birds in the very early universe; we have birds later on. We didn’t have time in the early universe, but we have time later on,” said Stephen Hawking’s colleague, physicist James Hartle, at the University of California, Santa Barbara about what came before the Big Bang.

“Asking what came before the Big Bang,” said Stephen Hawking, “would be like asking what lies south of the South Pole. There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?” Hawking made the remark in 1981, at a gathering of many of the world’s leading cosmologists at the Vatican’s Pontifical Academy of Sciences.

Beginning of Our Universe – One of the Big Open Questions

The beginning of our universe – if there is one – is one of the big open questions in theoretical physics. The Big Bang is one of science’s great mysteries, and it seems the plot has thickened thanks to new research that refutes prevailing theories about the birth of the universe. A classical description of the Big Bang implies a singularity: a point of infinite smallness, at which Einstein’s theory of gravity – general relativity – breaks down.

“We know a lot about how our universe evolved from a second or so after the Big Bang onward. Looking back even further, the Large Hadron Collider has taught us a great deal about what our universe was like when it was as young as a trillionth of a second old,” wrote cosmologist Dan Hooper at the University of Chicago in an email to The Daily Galaxy. “But when it comes to anything prior to that,” Hooper notes, “we are blindly extrapolating. We have some fairly good reasons to think that our universe underwent a period of hyperfast expansion — cosmic inflation — when it was very young. What happened before that is anyone’s guess.”

No-Boundary Proposal 

To tackle this problem, two proposals were put forward in the 1980s: the “no-boundary proposal” by Stephen Hawking and James Hartle, and Alexander Vilenkin’s theory known as “tunneling from nothing.” Each proposal attempted to describe a smoother beginning to spacetime, using quantum theory. Rather than the infinitely pointy needle of the classical Big Bang, the proposals described something closer to the rounded tip of a well-used pencil – curved, with no edge or sharp point.

The “no-boundary proposal,” which Hawking and Hartle fully formulated in a 1983 paper, reports Mike Zeng in Quanta, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expands from a point of zero size.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death.

“I don’t believe in a wavefunction of the universe, nor in full determinism,” wrote the Swiss physicist Nicolas Gisin, author of “Mathematical languages shape our understanding of time in physics,” in an email to The Daily Galaxy, referring to Hartle and Hawking’s formula, the so-called “wave function of the universe,” which encompasses the entire past, present and future at once — making debatable the seeds of creation, a creator, or any phase transition from a time before. “Time passes, we all know that,” Gisin continued. “Indeterminism implies the creation of new information, though meaningless information. Such creation of new information doesn’t require any God nor intelligent design.”

While Hawking’s no-boundary view has spawned much research, new mathematical work suggests such a smooth beginning could not have given birth to the ordered universe we see today.

But two years ago, a paper by Neil Turok and Job Feldbrugge of the Perimeter Institute (Canada), and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question, pointing out mathematical inconsistencies in the Hawking “no boundary” and “tunneling” proposals.

The proposal is only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours, writes Zeng. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

Hartle and Hawking’s proposal presents a radical new vision of time, reports Zeng: “Each moment in the universe becomes a cross-section of a shuttlecock shaped cosmos; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Turok, a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing.

 

Beautiful Proposals That Don’t Hold Up

Turok, the Perimeter Institute’s director and Niels Bohr Chair, says the previous models were “beautiful proposals seeking to describe a complete picture of the origin of spacetime,” but they don’t hold up to this new mathematical assessment. “Unfortunately, at the time those models were proposed, there was no adequately precise formulation of quantum gravity available to determine whether these proposals were mathematically meaningful.”

No Smooth Beginning for Spacetime

The new research, outlined in a paper called “No smooth beginning for spacetime,” demonstrates that a universe emerging smoothly from nothing would be “wild and fluctuating,” strongly contradicting observations, which show the universe to be extremely uniform across space.

“Hence the no-boundary proposal does not imply a large universe like the one we live in, but rather tiny curved universes that would collapse immediately,” said Lehners, a former Perimeter postdoc who leads the theoretical cosmology group at the Albert Einstein Institute.

Turok, Lehners, and Feldbrugge reached this result by revisiting the foundations of the field. They found a new way to use powerful mathematics developed over the past century to tackle one of physics’ most basic problems: how to connect quantum physics to gravity. The work builds on previous research Turok conducted with Steffen Gielen, a postdoc at the Canadian Institute for Theoretical Astrophysics and at Perimeter, in which they replaced the concept of the “classical big bang” with a “quantum big bounce.”

Turok, Lehners, and Feldbrugge are now trying to determine what mechanism could have kept large quantum fluctuations in check while allowing our large universe to unfold.

Rethink the Most Elementary Models of Quantum Gravity

The new research implies that “we either should look for another picture to understand the very early universe, or that we have to rethink the most elementary models of quantum gravity,” said Feldbrugge. “Uncovering this problem gives us a powerful hint. It is leading us closer to a new picture of the big bang,” concluded Turok. 

“People place huge faith in Stephen’s intuition,” Turok told Zeng. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Read: “Physicists Debate Hawking’s Idea That the Universe Had No Beginning”

The Daily Galaxy, Maxwell Moe, astrophysicist, NASA Einstein Fellow, University of Arizona via The Perimeter Institute and Quanta

Monday, March 11, 2024

Why scientists think the Multiverse isn’t just fiction

The Multiverse fuels some of the 21st century's best fiction stories. But its supporting pillars are on extremely stable scientific footing.


How likely or unlikely was our Universe to produce a world like Earth? And how plausible would those odds be if the fundamental constants or laws governing our Universe were different? These questions may be scientifically answerable within the theory of the Multiverse, which predicts a large number of disconnected regions where a Big Bang occurs, separated by continuously inflating space.
Credit: Stu Gray / Alamy

KEY TAKEAWAYS

One of the most successful theories of 20th century science is cosmic inflation, which preceded and set up the hot Big Bang, pushing back the origin of our Universe earlier than ever.

We also know how quantum fields generally work, and if we assume inflation is an inherently quantum field, then there will always be more "still-inflating" space out there beyond the edge of our Universe.


Whenever and wherever inflation ends, you get a hot Big Bang: an infinite number of them as time goes on. If inflation and quantum field theory are both correct, a Multiverse is absolutely necessary.

Ethan Siegel

STARTS WITH A BANG — MARCH 6, 2024


Everywhere we look in the Universe, we see many examples of objects that are similar, but each one is unique. Of all the galaxies, stars, and planets we know of, no two are identical, but rather each one has its own unique history, properties, and composition. Yet it’s a compelling idea that, given enough Universe to work with, eventually the particles within it would have organized themselves in such a way that the same possibility — no matter how unlikely — occurs multiple different times. Perhaps, given the idea of an infinite Universe, there may even be an infinite number of copies of every single system we can imagine, including planet Earth, complete with each and every one of us living on it.

It’s this idea, that there might be an infinite number of copies of each one of us out there, somewhere, that gave rise to our modern notion of the Multiverse. Perhaps there are different versions of us out there, where one tiny decision, outcome, or even a quantum measurement led to a vastly different result down the road. While many have derided the Multiverse as a fundamentally unscientific idea — as, after all, there’s no way to see, test, or access information about any portion of the cosmos beyond our limited observable Universe — the fact is that the Multiverse’s very existence is rooted in science itself. In fact, if just two things are true: that cosmic inflation, which preceded and set up the Big Bang, occurred as we think it did, and that inflation, like all other fields in the Universe, is inherently a quantum field in nature, obeying all the quantum rules that other quantum theories obey,

then a Multiverse comes along as an inevitable consequence of those ideas. Here’s why physicists, despite the objections of a few, overwhelmingly claim that a multiverse must exist.
The ‘raisin bread’ model of the expanding Universe, where relative distances increase as the space (dough) expands. The farther away any two raisins are from one another, the greater the observed redshift will be by the time the light is received. The redshift-distance relation predicted by the expanding Universe is borne out in observations and has been consistent with what’s been known since the 1920s.
Credit: NASA/WMAP Science Team

The story begins back with the discovery of the expanding Universe. Back in the 1920s, the evidence became overwhelming that not only were the copious spirals and ellipticals in the sky actually entire galaxies unto themselves, but that the farther away such a galaxy was determined to be, the greater the amount its light was shifted to systematically longer wavelengths. While a variety of interpretations were initially suggested, they all fell away with more abundant evidence until only one remained: the Universe itself was undergoing cosmological expansion, like a loaf of leavening raisin bread, where bound objects like galaxies (e.g., raisins) were embedded in an expanding Universe (e.g., the dough).
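At low redshift, the relation the astronomers uncovered is linear: Hubble’s law, v = H0 × d, so the redshift z ≈ v/c grows in proportion to distance. A back-of-the-envelope sketch, using an assumed round value of H0 = 70 km/s/Mpc:

```python
# Hubble's law at low redshift: recession velocity v = H0 * d, so the
# observed redshift z ~ v / c grows linearly with distance.  H0 here is
# an assumed round value; the measured figure is close to it.
H0 = 70.0        # Hubble constant, km/s per megaparsec (assumed)
C = 299_792.458  # speed of light, km/s

def redshift(distance_mpc):
    """Approximate redshift of a galaxy at the given distance (Mpc),
    valid only in the nearby, low-redshift regime (z << 1)."""
    return H0 * distance_mpc / C

# The farther the galaxy, the larger the shift to longer wavelengths:
for d in (10, 100, 1000):
    print(d, "Mpc ->", round(redshift(d), 4))
```

The systematic growth of z with distance is exactly the pattern that singled out cosmological expansion over the rival interpretations.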


If the Universe was expanding today, and the radiation within it was being shifted toward longer wavelengths and lower energies, then that means that in the past, the Universe must have been smaller, denser, more uniform, and hotter. As long as any amount of matter and radiation are a part of this expanding Universe, the idea of the Big Bang yields three explicit and generic predictions:

  • a large-scale cosmic web whose galaxies grow, evolve, and cluster more richly over time,
  • a low-energy background of blackbody radiation, left over from when neutral atoms first formed in the hot, early Universe,
  • and a specific set of ratios for the lightest elements — hydrogen, helium, lithium, and their various isotopes — that exist even in regions that have never yet formed stars at all.


This snippet from a structure-formation simulation, with the expansion of the Universe scaled out, represents billions of years of gravitational growth in a dark matter-rich Universe. Over time, overdense clumps of matter grow richer and more massive, growing into galaxies, groups, and clusters of galaxies, while the less dense regions than average preferentially give up their matter to the denser surrounding areas.
Credit: Ralf Kaehler and Tom Abel (KIPAC)/Oliver Hahn

All three of these predictions have been observationally borne out, and that’s why the Big Bang reigns supreme as our leading theory of the origin of our Universe, while all of its other competitors have fallen by the wayside. However, the Big Bang only describes what our Universe was like in its very early stages; it doesn’t explain why the Universe possessed the specific properties we’ve observed. In physics, if you know the initial conditions of your system and what the rules that it obeys are, you can predict extremely accurately — to the limits of your computational power and the uncertainty inherent in your system — how it will evolve arbitrarily far into the future.

Therefore, we can ask the important question: what initial conditions did the Big Bang need to have at its beginning in order to give us the Universe we observe now? The answers are a bit surprising, but what we find is that:

  • there had to be a maximum temperature that’s significantly (about a factor of ~1000, at least) lower than the Planck scale, which is where the known laws of physics break down,
  • the Universe had to have been born with density fluctuations of approximately the same magnitude on all scales (with slightly, by a few percent, smaller magnitude fluctuations on small cosmic scales than large ones),
  • the expansion rate and the total matter-and-energy density must have balanced almost perfectly: to at least ~30 significant digits at the moment the hot Big Bang began,
  • the same initial conditions — same temperature, density, and spectrum of fluctuations — must have existed at all locations, even between two locations where a signal at the speed of light could not have traversed the distance between them in the time elapsed since the Big Bang,
  • and the total entropy of the Universe must have been much, much lower than it is today, by a factor of many trillions.


If these three different regions of space never had time to thermalize, share information, or transmit signals to one another, then why are they all the same temperature? This is one of the problems with the initial conditions of the Big Bang; how could these regions all obtain the same temperature unless they started off that way, somehow?
Credit: E. Siegel/Beyond the Galaxy

Whenever we come up against a question of initial conditions — basically, why did our system start off the way it must have begun — we only have two options available to us. We can appeal to the unknowable, saying that it is this way because it’s the only way it could’ve been (i.e., the Lady Gaga explanation, saying it was simply “born this way”), and we can’t know anything further. However, there’s a scientific approach we can try as well: we can attempt to find a mechanism for setting up and creating the conditions that we know we needed to have. That second pathway is what physicists call “appealing to dynamics,” where the mechanism we devise must do three important things:

  1. It has to reproduce every success that the model it’s trying to supersede — the hot Big Bang in this instance — produces. Those earlier cornerstones must all arise from any mechanism we propose.
  2. It has to explain the key observational fact that the Big Bang cannot: the initial conditions the Universe started off with. These problems, the ones that come unexplained within the Big Bang alone, must be explained by whatever novel idea comes along.
  3. And it has to make new predictions that differ from the original (Big Bang) theory’s predictions, where those predictions must lead to a consequence that is in some way observable, testable, and/or measurable.

The only idea we’ve had that met all three of these criteria was the theory of cosmic inflation, which has now achieved unprecedented successes on all three fronts.


Exponential expansion, which takes place during inflation, is so powerful because it is relentless. With every ~10^-35 seconds (or so) that passes, the volume of any particular region of space doubles in each direction, causing any particles or radiation to dilute and causing any curvature to quickly become indistinguishable from flat. After only a few hundred doubling times, or ~10^-32 seconds, a fluctuation that was initially smaller than the Planck scale would now be stretched to be larger than the presently observable Universe.
Credit: E. Siegel (L); Ned Wright’s Cosmology Tutorial (R)

What inflation basically says is that the Universe, before it was hot, dense, and filled with matter-and-radiation everywhere, was in a state where it was dominated by a very large amount of energy that was inherent to space itself: some sort of field or vacuum energy. Only, unlike today’s dark energy, which has a very small energy density (the equivalent of about one proton per cubic meter of space), the energy density during inflation was tremendous: some ~10^25 times greater than the dark energy density is today. Since it’s the energy density, according to Einstein’s general relativity, that determines the expansion rate, that means that during inflation, not only was the expansion rate incredibly large, but it was relentless: as space continues to expand, the expansion rate remains enormous.

This is profoundly different behavior from the Universe we’re familiar with today. In an expanding Universe with matter and radiation, the volume increases while the number of particles stays the same, and hence the density drops. Since the expansion rate is determined by the energy density, the expansion rate of the Universe slows down over time as that density falls.

But if the energy density is in a form that’s intrinsic to space itself, then the energy density remains constant with time, and so too will the expansion rate. The result is what we know as exponential expansion, where after a very small period of time, the Universe doubles in size, and after that time passes again, it doubles again, and so on. In very short order — a tiny fraction of a second — a region that was initially smaller than the smallest subatomic particle can get stretched to be larger than the entire visible Universe is today.



In the top panel, our modern Universe has the same properties (including temperature) everywhere because they originated from a region possessing the same properties. In the middle panel, the space that could have had any arbitrary curvature is inflated to the point where we cannot observe any curvature today, solving the flatness problem. And in the bottom panel, pre-existing high-energy relics are inflated away, providing a solution to the high-energy relic problem. This is how inflation solves the three great puzzles that the Big Bang cannot account for on its own.
Credit: E. Siegel/Beyond the Galaxy

During inflation, the Universe — irrespective of what properties it had at the start of inflation — gets stretched to enormous scales. This accomplishes a tremendous number of things in the process, among them:

stretching the observable Universe, irrespective of what its initial curvature was, to be indistinguishable from flat,
taking whatever initial conditions existed in the region that began inflating, and stretching them so that they’re now uniform across the entire visible Universe,
taking whatever quanta were present within that region, prior to inflation, and rapidly driving them away from one another to arbitrarily low densities,
creating minuscule quantum fluctuations and stretching them across the Universe as well, so that they’re almost the same on all distance scales, but with slightly smaller magnitudes on smaller scales (when inflation is about to end),
converting all that “inflationary” field energy into matter-and-radiation, but only allowing that matter-and-radiation to reach a maximum temperature that’s still well below the Planck scale (but comparable to the inflationary energy scale),
and creating a spectrum of density and temperature fluctuations that exist on scales larger than the cosmic horizon, and that are adiabatic (of constant entropy) and not isothermal (of constant temperature) everywhere.

This, at last, does all three of the things we require for a new theory to be considered for superseding an older one. Inflation reproduces the successes of the non-inflationary hot Big Bang, provides a mechanism for explaining the Big Bang’s initial conditions, and makes a slew of novel predictions that differ from those of a non-inflationary beginning. Beginning in the 1990s and continuing through the present day, the inflationary scenario’s predictions have agreed with observations, distinguishing it from a non-inflationary hot Big Bang.
The quantum fluctuations inherent to space, stretched across the Universe during cosmic inflation, gave rise to the density fluctuations imprinted in the cosmic microwave background, which in turn gave rise to the stars, galaxies, and other large-scale structures in the Universe today. This is the best picture we have of how the entire Universe behaves, where inflation precedes and sets up the Big Bang. Unfortunately, we can only access the information contained inside our cosmic horizon, which is all part of the same fraction of one region where inflation ended some 13.8 billion years ago.
Credit: E. Siegel; ESA/Planck and the DOE/NASA/NSF Interagency Task Force on CMB research

Based on the properties that we observe our Universe to possess today, there’s a minimum amount of inflation that had to have occurred in the past in order to reproduce what we see. That further implies that there are certain conditions that inflation has to satisfy in order to be successful: the conditions that give rise to those predictions and post-dictions we just mentioned. Perhaps the simplest, most easy-to-understand way to model inflation is to treat it as a hill, where as long as you stay on top of the hill, you inflate, but as soon as you roll down into the valley below, inflation comes to an end and transfers its energy into matter and radiation.


If you do this, you’ll find that there are certain shapes your hill can have, or what physicists call “potentials,” that succeed on these fronts, while others simply don’t. The key to getting the amount of inflation that you need has everything to do with the top of the hill: it needs to be flat enough in shape over a large enough region. In simple terms, if you think of the inflationary field as a ball atop that hill, it needs to roll slowly for the majority of inflation’s duration, only picking up speed and rolling rapidly when it enters the valley, which is what brings inflation to an end. We, as scientists, have quantified how slowly inflation needs to roll, which allows us to learn something about the required shape of this potential. As long as the top is sufficiently flat, inflation can work as a viable solution to the beginning of our Universe.
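How “flat enough” the top of the hill must be can be stated quantitatively. In the standard slow-roll treatment (a textbook result, not tied to any one inflationary model), the potential V(φ) must satisfy two conditions, expressed through the slow-roll parameters:

```latex
\epsilon \equiv \frac{M_{\rm Pl}^2}{2}\left(\frac{V'(\phi)}{V(\phi)}\right)^2 \ll 1,
\qquad
\eta \equiv M_{\rm Pl}^2\,\frac{V''(\phi)}{V(\phi)}, \quad |\eta| \ll 1,
```

where M_Pl is the reduced Planck mass and primes denote derivatives with respect to the inflationary field φ. Inflation ends when ε grows to roughly 1, i.e., when the ball reaches the steep part of the hill.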



When cosmic inflation occurs, the energy inherent in space is large, as it is at the top of this hill. As the ball rolls down into the valley, that energy converts into particles. This provides a mechanism for not only setting up the hot Big Bang, but for both solving the problems associated with it and making new predictions as well.
Credit: E. Siegel/Beyond the Galaxy

So, where does the idea of a Multiverse come into play? It has to do with the one way we can’t treat the ball-and-hill analogy too seriously: the fact that this is a purely classical view of things. The Universe, at least as we understand it, isn’t purely classical, but rather is quantum in nature. And that implies that inflation, like all of the fields we know of, ought to be a quantum field, too, as far as its very nature is concerned. The quantum nature of a field teaches us that many of its properties cannot be exactly determined, but rather will possess a probability distribution to them. And, just as with all time-dependent quantum systems, the greater the amount of time that passes, the greater the amount that the probability distribution will spread out.

In other words, inflation isn’t about rolling a point-like ball down a hill. Instead, what’s actually rolling down the hill is a quantum probability wavefunction, which is able to take on a variety of allowed values.

But as the ball rolls along the hill, the Universe is undergoing cosmic inflation, which means it’s expanding exponentially in all three dimensions. If we were to take a 1-by-1-by-1 cube and call that “our Universe,” then we could watch that cube expand during inflation. If it takes some tiny amount of time for the size of that cube to double, then it becomes a 2-by-2-by-2 cube, which requires 8 of the original cubes to fill. Allow that same amount of time to elapse, and it becomes a 4-by-4-by-4 cube, needing 64 original cubes to fill. Let that time elapse again, and it’s an 8-by-8-by-8 cube, with a volume of 512. After only about ~100 “doubling times,” we’ll have a Universe with approximately ~10^90 original cubes in it, or a Universe that’s expanded in volume by that same factor: ~10^90.
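The doubling arithmetic above is easy to check directly. This sketch simply iterates the cube-doubling described in the text; the figure of ~100 doublings is the one quoted above, not a derived quantity:

```python
# Each "doubling time," every side of the cube doubles in length,
# so the volume grows by a factor of 2**3 = 8 per step.
doublings = 100
side = 2 ** doublings        # linear size after ~100 doubling times
volume = side ** 3           # number of original 1x1x1 cubes needed
print(f"{volume:.3e}")       # ~2.037e+90, i.e. roughly ~10^90
```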
If inflation is a quantum field, then the field value spreads out over time, with different regions of space taking different realizations of the field value. In many regions, the field value will wind up in the bottom of the valley, ending inflation, but in many more, inflation will continue so long as the ball remains on the flat part of the hill, where it can remain arbitrarily far into the future.
Credit: E. Siegel/Beyond the Galaxy

Here’s where the problem arises. If inflation is a quantum field, and quantum fields spread out over time, then what happens when the “quantum ball” atop the hill is rolling slowly, along the flat part of the hill?

The answer is that the part of the wavefunction that spreads closer to the valley-end of the hill is more likely to roll into the valley itself. In those regions, inflation is very likely to swiftly come to an end, where that field energy will then get converted to matter-and-radiation, and something that we know as a hot Big Bang will ensue. This region might be irregularly shaped at the boundaries, but some region that was just like it seems to describe the portion of the observable Universe that we can see and access. So long as enough inflation occurred to reproduce the observational successes we see in our Universe, this appears to be a good description of our own cosmic history.

But what about the portions of the wavefunction that spread out closer to the top, flatter part of the hill? Inflation continues for longer there, and these are the regions that we can consider to be “outside” of the regions where inflation swiftly comes to an end. What does that imply for the regions where inflation comes to an end and a hot Big Bang ensues, versus those where inflation continues on, unabated, even while it ends elsewhere?


Wherever inflation occurs (blue cubes), it gives rise to exponentially more regions of space with each step forward in time. Even if there are many cubes where inflation ends (red Xs), there are far more regions where inflation will continue on into the future. The fact that inflation never comes to an end absolutely everywhere is what makes inflation ‘eternal’ once it begins, and where our modern notion of a Multiverse (where the regions with a red X describe separated, disconnected universes) comes from.
Credit: E. Siegel/Beyond the Galaxy

When you work out the mathematics for getting enough inflation before a hot Big Bang ensues, this is where science tells us that the Multiverse is all but inevitable. We have to mandate that the Universe experiences enough inflation so that our Universe can exist with the properties we observe it to have. We also know that, outside of the region where inflation ended, inflation must have continued onward for longer.

Now we ask the big question, “What is the relative size of those regions?” If we compare the regions where inflation ends at a certain time with the regions where inflation hasn’t ended after that time has elapsed,
we find that the latter regions, where inflation continues, are exponentially larger (and still growing with time) compared to the regions where it ends and a hot Big Bang ensues. Moreover, that size disparity continues to get worse as time goes on. Even if there are an infinite number of regions where inflation ends, there will be a larger infinity of regions where it persists. Moreover, the various regions where it ends — where hot Big Bangs occur — will all be causally disconnected, separated further by more regions of inflating space.
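A toy bookkeeping exercise makes this volume comparison concrete. Both the factor of 8 per doubling and the per-region end probability below are illustrative assumptions, not physical values:

```python
# Toy model of eternal inflation (illustrative only, not a simulation of
# real inflationary dynamics). Each doubling time, every inflating region
# becomes 8 regions (2 x 2 x 2), and each new region independently ends
# inflation (a "hot Big Bang") with an assumed probability p_end.
p_end = 0.1                   # hypothetical per-region termination chance
inflating = 1.0               # volume still inflating (arbitrary units)
ended = 0.0                   # cumulative volume where inflation has ended
for step in range(10):
    new_regions = inflating * 8
    ended += new_regions * p_end
    inflating = new_regions * (1 - p_end)
# The inflating volume multiplies by 8 * 0.9 = 7.2 each step, so it
# outgrows the cumulative ended volume and keeps doing so forever.
print(inflating > ended)
```

With these assumed numbers, the still-inflating volume stays larger than everything that has ended, and the gap widens with every step; inflation ends in ever more places, yet never ends everywhere.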

Put simply, if each hot Big Bang occurs in a “bubble” Universe, then the bubbles simply can never collide. What we wind up with is a larger and larger number of disconnected bubbles as time goes on, all separated by an eternally inflating space.


While many independent Universes are predicted to be created in an inflating spacetime, inflation never ends everywhere at once, but rather only in distinct, independent areas separated by space that continues to inflate. This is where the scientific motivation for a Multiverse comes from, and why no two Universes will ever collide. The Universe doesn’t expand into anything; it itself is expanding.
Credit: Ozytive/Public Domain

That’s what the Multiverse is, and why scientists accept its existence as the default position. We have overwhelming evidence for the hot Big Bang, and also that the Big Bang began with a set of conditions that don’t come with a de facto explanation. If we add in an explanation for it — cosmic inflation — then that inflating spacetime that set up and gave rise to the Big Bang makes its own set of novel predictions. Many of those predictions are borne out by observation, but other non-observable predictions also still arise as consequences of inflation.

One of them is the existence of myriads of universes, of disconnected regions each with their own hot Big Bang, that comprise what we know as a Multiverse when you take them all together. This doesn’t necessarily imply that different Universes have different rules or laws or fundamental constants, or that all the possible quantum outcomes you can imagine occur in some other pocket of the Multiverse. It doesn’t even necessarily mean that the Multiverse is physically real, as this is a prediction we cannot verify, validate, or falsify. But if the theory of inflation is a good one (and the data says it is), and our Universe is quantum in nature (and all evidence suggests that it is),
then a multiverse is all but inevitable. You may not like it, and you really may not like how some physicists abuse the idea, but until a better, viable alternative to inflation comes around — and until that alternative can clear those same three theoretical hurdles that inflation has already cleared — the Multiverse is very much here to stay.

Tuesday, November 09, 2021

 

Could our Universe be Someone’s Chemistry Project?

It is a pivotal time for astrophysicists, cosmologists, and philosophers alike. In the coming years, next-generation space and ground-based telescopes will come online that will use cutting-edge technology and machine learning to probe the deepest depths of the cosmos. What they find there, with any luck, will allow scientists to address some of the most enduring questions about the origins of life and the Universe itself.

Alas, one question that we may never be able to answer is the most pressing of all: if the Universe was conceived in a Big Bang, what was here before that? According to a new op-ed by Prof. Abraham Loeb (which recently appeared in Scientific American), the answer may be stranger than even the most “exotic” explanations. As he argued, the cosmos as we know it may be a “baby Universe” that was created by an advanced technological civilization in a lab!

As the former chair (2011-2020) of the astronomy department at Harvard University, the founding director of Harvard’s Black Hole Initiative (BHI), the director of the Institute for Theory and Computation (ITC) at the Harvard-Smithsonian Center for Astrophysics (CfA), and one of the chief researchers with the Galileo Project, Loeb is no stranger to “exotic” theories about advanced intelligence and cosmic origins.

His credentials also include chairing the National Academies’ Board on Physics and Astronomy, the advisory board for Breakthrough Starshot, and being a member of the President’s Council of Advisors on Science and Technology. He is also the author of the bestselling book “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth,” which addressed the possibility that the interstellar object ‘Oumuamua was an artificial probe.

This time around, it’s the foundations of the Universe itself (and whether or not aliens may have been involved) that have attracted Loeb’s interest. For starters, there have been many conjectures as to what might have existed before the Big Bang. Some of the more well-known examples include that the Universe emerged from a vacuum fluctuation or that it is a cyclic process with repeated periods of contraction and expansion – Big Bang, Big Crunch, repeat.

There is even the notion that the Universe was born from matter collapsing inside a black hole in another Universe, which then rebounded to form the other side of the Einstein–Rosen bridge (a “wormhole”) where our Universe was conceived. A similar version of this argument states that the Big Bang could have been a supermassive “white hole” that formed from a supermassive black hole (SMBH) in our parent universe.

Yet another theory is that our Universe is a consequence of the string theory interpretation of the multiverse, where infinite Universes coexist, and every possibility plays out an infinite number of times. According to Loeb, this could take the form of our Universe being created in a laboratory by an advanced civilization. “Since our universe has a flat geometry with a zero net energy, an advanced civilization could have developed a technology that created a baby universe out of nothing through quantum tunneling.”

In the context of quantum physics, tunneling refers to a phenomenon where a wave function can propagate through a potential barrier. This plays an essential role in physical phenomena, ranging from nuclear fusion and tunneling electron microscopes to quantum computing. Unfortunately, the Standard Model of particle physics cannot resolve how quantum mechanics and gravity interact, which is why a Theory of Everything (ToE) is still lacking.
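The claim above, that a wavefunction can leak through a classically forbidden barrier, can be illustrated with the standard textbook estimate for a rectangular barrier. The particle energy, barrier height, and barrier width below are made-up illustrative values, not figures from the article:

```python
import math

# Standard estimate for tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * L), with kappa = sqrt(2 m (V - E)) / hbar.
hbar = 1.054571817e-34        # reduced Planck constant, J*s
m_e  = 9.1093837e-31          # electron mass, kg
eV   = 1.602176634e-19        # one electron-volt in joules

E = 1.0 * eV                  # particle energy (illustrative)
V = 2.0 * eV                  # barrier height (illustrative)
L = 1.0e-9                    # barrier width: 1 nanometer (illustrative)

kappa = math.sqrt(2 * m_e * (V - E)) / hbar
T = math.exp(-2 * kappa * L)  # transmission probability
print(f"T ~ {T:.1e}")         # small but nonzero: the particle can tunnel
```

Classically the transmission would be exactly zero; quantum mechanically it is merely exponentially suppressed, which is the loophole Loeb’s scenario leans on.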

However, a sufficiently advanced species may have already developed a ToE and the technology for creating baby Universes. In essence, this theory offers a possible origin story that appeals to the religious notion of a creator and the secular notion of quantum gravity alike. It suggests that a Universe like our own – which hosts at least one civilization (i.e., us) – is like a biological system that reproduces over generations. As Loeb explained to Universe Today via email:

“It explains the Big Bang as an infinite series of baby universes born inside each other, just like chicks hatching out of eggs and laying new eggs later in their life. If something predated this series of generations – it would have been something else, just as in the ‘chicken and egg dilemma.'”

This is reminiscent of the Kardashev Scale, which characterizes civilizations by Type (I, II, and III) based on the amount of energy they can harness. Whereas Type Is are able to harness the energy of their entire planet, Type II civilizations can harness the energy of their whole star systems, and Type IIIs can harness the energy of their entire galaxy. In this case, says Loeb, the metric is a civilization’s ability to reproduce the astrophysical conditions that led to their existence.

Illustration of the Big Bang Theory
The Big Bang Theory: A history of the Universe starting from a singularity and expanding ever since. Credit: grandunificationtheory.com

For some, this whole Baby Universe theory might sound similar to the Zoo Hypothesis – a proposed resolution to the Fermi Paradox. But as Loeb explained, there’s a fundamental difference between the two:

“The Zoo is a place where you watch the animals, but a baby universe cannot be observed from the outside according to General Relativity, Einstein’s theory of gravity. The interior of the baby universe disappears from view of the creator and snaps out of the creator’s spacetime. The situation is analogous to the formation of a black hole, where all the matter that falls into it cannot be observed once it enters the black hole horizon.

“As a result the creator of the baby universe will never know which type of civilization formed in it and will also not be able to intervene. Creating a baby universe might not consume energy because the negative gravitational energy cancels out the positive energy of matter and radiation in our universe, which is characterized by a flat geometry.

“The fate of our Universe is completely independent of the baby universe, just as the history of a person that enters the event horizon of a black hole has no influence on us. Based on everything we know, our own universe will expand forever.”

Another appealing feature of this theory is the way it’s free of anthropic reasoning, which essentially states that the Universe was selected for us to exist in. Formally known as the Anthropic Principle, this stands in opposition to the Copernican Principle (or Cosmological Principle) that asserts that there is nothing special or unique about humanity or the space we occupy in the Universe. However, the mere fact that slight variations in the laws of physics would rule out life would seem to suggest that we are fortunate.

Artist view of an active supermassive black hole. Credit: ESO/L. Calçada

In recent years, it has been suggested that multiverse theory is a possible resolution for the Anthropic Principle. The Baby Universe theory is consistent with this idea, as it theorizes that the Universe gives rise to advanced civilizations that are drivers of a cosmic Darwinian selection process. At present, humanity is not advanced enough to replicate the cosmic conditions that led to our existence.

Whereas a civilization that could recreate these cosmic conditions (i.e., produce a “baby Universe” in a laboratory) would fall into class A on this proposed cosmic scale, a class B civilization could adjust the conditions in its immediate environment to be independent of its host star. Given our present situation, humanity is currently a class C or D since we cannot recreate the habitable conditions on our planet (when our Sun dies) and are carelessly destroying planet Earth through climate change.

But eventually, humanity may reach the point where we become a class A civilization and can partake in the hypothesized process of cosmic reproduction. Who knows? Maybe we will even be able to create a baby Universe that is an improvement over our own. Loeb contends that such hopes may be a tad optimistic but that the prospect for cosmic procreation presents some very inspiring possibilities:

“We are getting close to producing synthetic life in our laboratories. Once we will understand how to unify quantum mechanics and gravity, we might know how to make a baby universe in the laboratory. The ethics of making another universe would be similar to making another human being...

“But ultimately, it would be flattering to our species if the abilities that past generations assigned to God, namely creating a universe and creating life in it, will be at our disposal as an advanced scientific civilization. If another civilization that predated us by a billion years had reached that goal already and we will encounter it one day, then that civilization will be a good approximation to what our past religions regarded as God.”

Further Reading: Scientific American