Can "early dark energy" save the expanding Universe? – Big Think

If you measure the distant galaxies found throughout the Universe, you find that the cosmos is expanding at one particular rate: ~74 km/s/Mpc.
If you instead measure what the Universe was like when it was very young, and determine how the light has been stretched by the Universe’s expansion, you get a different rate: ~67 km/s/Mpc.
This 9% disagreement has reached the “gold standard” for evidence, and now demands an explanation. “Early dark energy” might be exactly it.
Whenever you have a puzzle, you have every right to expect that any and all correct methods should lead you to the same solution. This applies not only to the puzzles we create for our fellow humans here on Earth, but also to the deepest puzzles that nature has to offer. One of the greatest challenges we can dare to pursue is to uncover how the Universe has expanded throughout its history: from the Big Bang all the way up to today.
You can imagine starting at the beginning, evolving the Universe forward according to the laws of physics, and measuring those earliest signals and their imprints on the Universe to determine how it has expanded over time. Alternatively, you can imagine starting here and now, looking out at the distant objects as we see them receding from us, and then drawing conclusions as to how the Universe has expanded from that.
Both of these methods rely on the same laws of physics, the same underlying theory of gravity, the same cosmic ingredients, and even the same equations as one another. And yet, when we actually perform our observations and make those critical measurements, we get two completely different answers that don’t agree with one another. This is, in many ways, the most pressing cosmic conundrum of our time. But there’s still a possibility that no one is mistaken and everyone is doing the science right. The entire controversy over the expanding Universe could go away if just one new thing is true: if there was some form of “early dark energy” in the Universe. Here’s why so many people are compelled by the idea.
One of the great theoretical developments of modern astrophysics and cosmology comes straight out of general relativity and just one simple realization: that the Universe, on the largest cosmic scales, is both isotropic (the same in all directions) and homogeneous (the same in all locations).
As soon as you make those two assumptions, the Einstein field equations — the equations that govern how the curvature and expansion of spacetime and the matter and energy contents of the Universe are related to each other — reduce to very simple, straightforward rules.
Those rules teach us that the Universe cannot be static, but rather must be either expanding or contracting, and that measuring the Universe itself is the only way to determine which scenario is true. Furthermore, measuring how the expansion rate has changed over time teaches you what’s present in our Universe and in what relative amounts. Similarly, if you know how the Universe expands at any one point in its history, and also what all the different forms of matter and energy are present in the Universe, you can determine how it has expanded and how it will expand at any point in the past or future. It’s an incredibly powerful piece of theoretical weaponry.
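Those rules can be sketched numerically. Below is a minimal illustration, not any research code, of the Friedmann-equation relationship for a flat Universe: given today's expansion rate and the fractional energy contents (the density parameters, set here to illustrative, roughly Planck-like values), you can compute the expansion rate at any redshift z in the past.

```python
# Minimal sketch of the first Friedmann equation for a flat Universe:
#   H(z)^2 = H0^2 * [ Ω_m (1+z)^3 + Ω_r (1+z)^4 + Ω_Λ ]
# The density parameters below are illustrative, roughly Planck-like
# values, not fitted results.

def hubble_rate(z, H0=67.0, omega_m=0.315, omega_r=9e-5, omega_lambda=0.685):
    """Expansion rate H(z), in km/s/Mpc, at redshift z."""
    return H0 * (omega_m * (1 + z) ** 3
                 + omega_r * (1 + z) ** 4
                 + omega_lambda) ** 0.5

print(hubble_rate(0))     # today's rate: ~67 km/s/Mpc
print(hubble_rate(1100))  # roughly when the CMB was emitted: vastly larger
```

This is the sense in which knowing the contents plus the rate at one epoch pins down the entire expansion history: the same function, with the same parameters, answers the question at every redshift.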
One strategy is as straightforward as it gets.
First, you measure the distances to the astronomical objects that you can take those measurements of directly.
Then, you try to find correlations between intrinsic properties of those objects that you can easily measure, like how long a variable star takes to brighten to its maximum, fade to a minimum, and then re-brighten to its maximum again, as well as something that’s more difficult to measure, like how intrinsically bright that object is.
Next, you find those same types of objects farther away, like in galaxies other than the Milky Way, and you use the measurements you can make — along with your knowledge of how observed brightness and distance are related to one another — to determine the distance to those galaxies.
Afterward, you measure extremely bright events or properties of those galaxies, like how their surface brightnesses fluctuate, how the stars within them revolve around the galactic center, or how certain bright events, like supernovae, occur within them.
And finally, you look for those same signatures in faraway galaxies, again hoping to use the nearby objects to “anchor” your more distant observations, providing you with a way to measure the distances to very faraway objects while also being able to measure how much the Universe has cumulatively expanded over the time from when the light was emitted to when it arrives at our eyes.
We call this method the cosmic distance ladder, since each “rung” on the ladder is straightforward but moving to the next one farther out relies on the sturdiness of the rung beneath it. For a long time, an enormous number of rungs were required to go out to the farthest distances in the Universe, and it was exceedingly difficult to reach distances of a billion light-years or more.
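As a toy illustration of how a single rung works, suppose a correlation like the Cepheid period-luminosity relation tells you an object's intrinsic brightness, expressed as an absolute magnitude M; its observed (apparent) magnitude m then yields its distance through the inverse-square law. The specific numbers below are hypothetical, chosen only to show the arithmetic.

```python
# Toy distance-ladder rung: convert the distance modulus (m - M)
# into a distance via the standard relation m - M = 5 * log10(d / 10 pc).

def distance_parsecs(m, M):
    """Distance implied by apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5) / 5)

# A hypothetical Cepheid-like star: intrinsically M = -4,
# observed at apparent magnitude m = 26.
print(distance_parsecs(26.0, -4.0))  # 1e7 parsecs, i.e. ~10 Mpc away
```

Every rung repeats this same trick with a different brightness indicator, which is why an error in calibrating M on one rung propagates into every distance farther out.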
With recent advances in not only telescope technology and observational techniques, but also in understanding the uncertainties surrounding the individual measurements, we’ve been able to completely revolutionize distance ladder science.
About 40 years ago, there were perhaps seven or eight rungs on the distance ladder, they brought you out to distances of under a billion light-years, and the uncertainty in the rate of expansion of the Universe was about a factor of 2: between 50 and 100 km/s/Mpc.
Two decades ago, the results of the Hubble Space Telescope Key Project were released and the number of necessary rungs was brought down to about five, distances brought you out to a few billion light-years, and the uncertainty in the expansion rate reduced to a much smaller value: between 65 and 79 km/s/Mpc.
Today, however, there are only three rungs needed on the distance ladder, as we can go directly from measuring the parallax of variable stars (such as Cepheids), which tells us the distance to them, to measuring those same classes of stars in nearby galaxies (where those galaxies have contained at least one type Ia supernova), to measuring type Ia supernovae out to the farthest reaches of the distant Universe where we can see them: up to tens of billions of light-years away.
Through a Herculean set of efforts by a myriad of observational astronomers, all of the uncertainties that had plagued these differing sets of observations for so long have been reduced and reduced and reduced further, until now, in 2022, each one of those sources of error is below even the ~1% level. All told, the expansion rate, through the distance ladder method, is now robustly determined to be about 73 km/s/Mpc, with an uncertainty of merely ±1 km/s/Mpc atop that. For the first time in history, the cosmic distance ladder, from the present day looking back more than 10 billion years in cosmic history, has given us the expansion rate of the Universe to such a high precision.
Meanwhile, there’s a completely different method we can use to independently “solve” the exact same puzzle: the early relic method. When the hot Big Bang begins, the Universe is almost, but not quite perfectly, uniform. While the temperatures and densities are initially the same everywhere, in all locations and in all directions, to 99.997% precision, there are those tiny ~0.003% imperfections in both of them.
Theoretically, they were generated by cosmic inflation, which predicts their spectrum very accurately. Dynamically, the regions of slightly higher-than-average density will preferentially attract more and more matter into them, leading to the gravitational growth of structure and, eventually, the entire cosmic web. However, the presence of two types of matter — normal and dark matter both — as well as radiation, which collides with normal matter but not with dark matter, causes what we call “acoustic peaks,” meaning that the matter tries to collapse, but rebounds, creating a series of peaks-and-valleys in the densities we observe on various scales.
These peaks-and-valleys show up in two places at very early times.
They show up in the leftover glow from the Big Bang: the cosmic microwave background. When we look at the temperature fluctuations, or the departures from the average (2.725 K) temperature in the radiation left over from the Big Bang, we find that they're roughly of that ~0.003% magnitude on large cosmic scales, then rise to a maximum on smaller angular scales of about ~1 degree, then decrease again, rise, fall, rise, etc., for a total of about seven acoustic peaks. The size and scale of these peaks, calculable for when the Universe was only 380,000 years old, depend solely on how the Universe has expanded from the time that light was emitted to the present day, 13.8 billion years later.
They show up in the large-scale clustering of galaxies, where that original ~1 degree-scale peak has now expanded to correspond to a distance of around 500 million light-years. Wherever you have a galaxy, you're somewhat more likely to find another galaxy 500 million light-years away than you are to find one either 400 million or 600 million light-years away: evidence of that very same imprint. By tracing how that distance scale has changed as the Universe has expanded, using a standard "ruler" instead of a standard "candle," we can determine how the Universe has expanded over its history.
Here's the thing: whether you use the cosmic microwave background or the features we see in the large-scale structure of the Universe, you get a consistent answer: 67 km/s/Mpc, with an uncertainty of only ±0.7 km/s/Mpc, or ~1%.
That’s the problem. That’s the puzzle. We have two fundamentally different ways of measuring how the Universe has expanded over its history, and each one is entirely self-consistent. All distance ladder methods and all early relic methods give the same answers as one another, and those answers fundamentally disagree between those two methods.
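In numbers, using the central values and uncertainties quoted above, the size of the disagreement and a rough estimate of its statistical significance (combining the two uncertainties in quadrature) work out as follows:

```python
# Central values and 1-sigma uncertainties quoted in this article,
# in km/s/Mpc.
ladder, ladder_err = 73.0, 1.0  # distance ladder
relic, relic_err = 67.0, 0.7    # early relic (CMB + large-scale structure)

fractional = (ladder - relic) / relic  # the ~9% disagreement
significance = (ladder - relic) / (ladder_err ** 2 + relic_err ** 2) ** 0.5

print(f"{fractional:.1%} disagreement at {significance:.1f} sigma")
```

That back-of-the-envelope combination lands right around the five-sigma "gold standard" threshold for evidence in physics.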
If there truly are no major errors that either set of teams is making, then something simply doesn’t add up about our understanding of how the Universe has expanded from 380,000 years after the Big Bang to the present day, 13.8 billion years later.
Unless there’s a mistake somewhere that we haven’t identified, it’s extremely difficult to concoct an explanation that reconciles these two classes of measurements without invoking some sort of new, exotic physics.
The reason why this is such a puzzle is as follows.
If we know what’s in the Universe, in terms of normal matter, dark matter, radiation, neutrinos, and dark energy, then we know how the Universe expanded from the Big Bang until the emission of the cosmic microwave background, and from the emission of the cosmic microwave background until the present day.
That first step, from the Big Bang until the emission of the cosmic microwave background, sets the acoustic scale (the scales of the peaks and valleys), and that’s a scale that we measure directly at a variety of cosmic times. We know how the Universe expanded from 380,000 years of age to the present, and “67 km/s/Mpc” is the only value that gives you the right acoustic scale at those early times.
Meanwhile, that second step, from after the cosmic microwave background was emitted until now, can be measured directly from stars, galaxies, and stellar explosions, and “73 km/s/Mpc” is the only value that gives you the right expansion rate. There are no changes you can make in that regime, including changes to how dark energy behaves (within the already-existing observational constraints), that can account for this discrepancy.
But what you can do is change the physics of what happened in that first step: during the time between the first moments of the Big Bang and the moment when the light of the cosmic microwave background scatters off of a free electron for the final time.
During those first 380,000 years of the Universe, we traditionally make a simple assumption: that matter, both normal and dark, as well as radiation, in the form of both photons and neutrinos, are the only important energy components of the Universe that matter. If you start the Universe off in a hot, dense, rapidly expanding state with these four types of energy, in the corresponding proportions that we observe them to have today, you’ll arrive at the Universe we know at the time the cosmic microwave background is emitted: with the overdensities and underdensities of the magnitude we see at that epoch.
But what if we’re wrong? What if it wasn’t just matter and radiation during that time, but what if there was also some significant amount of energy inherent to the fabric of space itself? That would change the expansion rate, increasing it at early times, which would correspondingly increase the “scale” at which these underdensities and overdensities reach a maximum. In other words, it would change the size of the acoustic peaks that we see.
And what, then, would that mean?
If we didn’t know it was there, and we assumed there was no “early dark energy” when in actuality there was, we’d draw an incorrect conclusion: we’d conclude that the Universe expanded at an incorrect rate, because we’d have accounted incorrectly for the different components of energy that were present.
A Universe that contained an early form of dark energy, one that later decayed away into matter and/or radiation, would have expanded to a different, larger size in the same amount of time compared to what we would have naively expected. As a result, when we make a statement like, “this was the size and scale that the Universe had expanded to after 380,000 years,” we’d actually be off.
Then you can take it one step further and ask: could you be off by, say, 9%, the amount you’d need to be off by to explain the discrepancy in the two different ways of measuring the expansion rate?
And the answer, even with the copious observational constraints we have at our disposal today, is a resounding yes. Simply assuming there was no “early dark energy,” if in fact there was, could fully and easily account for the difference we infer when measuring the expansion rate of the Universe via these two different methods.
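One crude way to see the mechanism is to bolt a toy "early dark energy" term onto a Friedmann-equation sketch: an extra component whose energy density equals some fraction of the standard contents before a critical redshift, and which then decays away. The parameters f_ede and z_c below are purely illustrative assumptions, not values from any fitted model, and the instantaneous decay is a deliberate oversimplification.

```python
# Toy model: an early-dark-energy component whose energy density is
# f_ede times that of the standard components at redshifts above z_c,
# decaying away instantly below z_c (a crude step function).

def expansion_rate(z, H0=67.0, omega_m=0.315, omega_r=9e-5,
                   omega_lambda=0.685, f_ede=0.0, z_c=3500.0):
    """H(z) in km/s/Mpc, with an optional early-dark-energy boost."""
    standard = (omega_m * (1 + z) ** 3
                + omega_r * (1 + z) ** 4
                + omega_lambda)
    boost = 1.0 + (f_ede if z > z_c else 0.0)  # extra energy only early on
    return H0 * (standard * boost) ** 0.5

# Before z_c, an extra 10% of energy density boosts H(z) by ~5%,
# shrinking the acoustic scale that the early relic method relies on:
print(expansion_rate(5000.0, f_ede=0.1) / expansion_rate(5000.0))
```

The point of the sketch: the extra term only changes the expansion rate before z_c, so the late-time Universe looks identical while the inferred acoustic scale, and hence the early-relic value of the expansion rate, shifts.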
That doesn’t mean, of course, that there actually was such an early form of dark energy.
But, and this is important, we also have only very loose constraints on such a scenario; pretty much no existing evidence rules it out.
When you put all the pieces of your puzzle together and you’re still left with a missing piece, the most powerful theoretical step you can take is to figure out, with the minimum number of extra additions, what one extra component you could add in to complete it. We’ve already added dark matter and dark energy to the cosmic picture, and are only just now discovering that maybe, quite possibly, that isn’t enough to resolve the issues we’re having. With just one more ingredient — and there are many possible incarnations of how it could manifest — the existence of some form of early dark energy could finally bring the Universe into balance. It’s not a sure thing, but in an era where the evidence can no longer be ignored, it’s time to start considering that there may be even more to the Universe than anyone has realized before.

