Wendy Freedman. American Scientist. Volume 91, Issue 1. Jan/Feb 2003.
I may hold the distinction of being the only astronomer to have been trapped in a cage at the top of a large telescope on the 14,000-foot summit of Mauna Kea. (I know another astronomer who once fell out of such a cage, but that is another story.) Twenty years ago, before most observers spent their nights in warm computer rooms, astronomers commonly observed in very small cages at the prime focus of giant telescopes. Although the long winter nights were almost unbearably cold, we had spectacular views of the dark night sky, and we listened to music through headphones as we recorded images and spectra of our celestial targets. At the end of one night, a faulty telescope position caused an elevator to jam, making it impossible for me to leave the cage. This was no small inconvenience: the closest restroom was 40 feet below, and I was unpleasantly confined within two snowsuits. It was another seven hours before a group of engineers arrived from sea level (delayed by a flat tire), climbed up the side of the telescope’s dome and finally pried the elevator free with a crowbar. Now why would an astronomer want to subject herself to such indignities?
As an observational cosmologist, I can say that the rewards more than compensate for the occasional discomfort. The goal of our work is nothing less than trying to understand the formation and evolution of the universe. We do this through observations and experiments that ultimately provide numbers as answers: the values of cosmological parameters. These numbers can tell us something important about the universe: how much matter there is, whether the universe is curved or flat, and even how it might all end. Understanding the significance of these numbers and this curious cosmological quest for parameters requires a brief diversion into some history.
The modern science of cosmology is founded on general relativity, Albert Einstein’s theory of gravity, whose equations describe the global behavior of matter and energy, and space and time. Some solutions to these equations, notably those devised by the Russian mathematician Alexander Friedmann in the 1920s, suggest that the universe originated from a very hot, very dense state in a “big bang” explosion and that it has been expanding in size ever since. The dynamics of the expansion are expressed by the so-called Friedmann equation, which describes the evolution of the universe in terms of its density and geometry (see “Friedmann’s Equation and Cosmological Parameters,” page 42). Applying Friedmann’s equation requires that we know something about a few parameters it contains, such as H, the Hubble parameter, which defines the expansion rate; Ωm, the mass density of the universe; and Ωk, the curvature of the universe. These numbers are not inherently defined by the equation. Instead, they remain for us to measure.
Some of the first efforts to make these measurements date back to 1929, when the American astronomer Edwin Hubble discovered that our universe is indeed expanding. He showed that the farther a galaxy is from us, the faster it is speeding away. This velocity-distance relation came to be called Hubble’s law, and the value that describes the current rate of expansion is H0. Hubble was the first to measure H0 (which wasn’t named as such at the time), deriving a value of 500 kilometers per second per megaparsec. (A parsec is equal to 3.26 light-years.) For various reasons, Hubble’s result was far off the mark, but even a couple of years ago, estimates for H0 varied by a factor of two, generally ranging between 50 and 100 (the values are usually stated without the units of measure).
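The arithmetic behind Hubble's law is simple enough to sketch in a few lines of code. The function below (my own illustration, not part of the Key Project's software) shows how dramatically the inferred velocities differ between Hubble's 1929 value of H0 and the modern one:

```python
# Hubble's law: recession velocity v = H0 * d.
# A minimal sketch; the H0 values below are those quoted in the article,
# in the conventional units of km/s per megaparsec.

def recession_velocity(distance_mpc, h0=72.0):
    """Recession velocity (km/s) of a galaxy distance_mpc megaparsecs away."""
    return h0 * distance_mpc

# A galaxy 100 Mpc away recedes at 7,200 km/s with the modern H0 = 72 ...
v_modern = recession_velocity(100)                 # 7200.0 km/s
# ... but would be assigned 50,000 km/s with Hubble's original value of 500.
v_hubble_1929 = recession_velocity(100, h0=500.0)  # 50000.0 km/s
```

The factor-of-two spread between H0 = 50 and H0 = 100 propagates directly into every distance and age inferred from the relation, which is why pinning down the constant mattered so much.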
This lack of precision was problematic because H0 is a key parameter needed to estimate both the age and size of the universe. A twofold range in H0 yields an unacceptably wide span for the age of the universe: anywhere from 10 to 20 billion years. Such uncertainty also puts few constraints on cosmological models.
But all of this is changing. The value of H0, along with those of some other cosmological parameters, is becoming increasingly accessible to accurate measurement as new technologies allow us to see farther into the universe than ever before. The Hubble Space Telescope (HST), which was launched in 1990, is among these technological breakthroughs. One of the primary reasons the HST was built was to determine a more accurate value for H0. This “Key Project” of the HST program was an enormous effort, involving 30 astronomers (I was one of three co-leaders), spanning eight years of work and about 1,000 hours of HST time. It was the largest project tackled by HST in its first decade, and it was finally completed in 2001.
A Variable and a Constant
In principle, determining the Hubble constant should be straightforward. It requires only the measurement of a galaxy’s distance and velocity. In practice, however, devising a method to measure distances over cosmological scales is far from trivial. Even relatively simple velocity measurements are complicated by the fact that galaxies tend to have other galaxies as neighbors, and so they interact gravitationally, perturbing one another’s motions. These peculiar velocities are distinct from the recession velocities (the Hubble flow) of galaxies in the expanding universe, and this effect must be accounted for or minimized.
A galaxy’s velocity is calculated from the observed shift of lines in its spectrum (the pattern of electromagnetic radiation it emits at different wavelengths). Galaxies that are moving away from us emit light that is shifted to longer (redder) wavelengths because it is stretched, or “redshifted,” by the recession. The greater the shift in wavelength, the faster the galaxy’s velocity. Since a galaxy’s recession velocity is proportional to its distance (Hubble’s law again), the farther the distance measurements can be made, the smaller the proportional impact of peculiar velocities on the overall expansion velocity. Astronomers can further reduce the uncertainty by observing a number of galaxies distributed across the sky so that the peculiar motions can be averaged out.
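The shift-to-velocity conversion can be sketched as follows. At the low redshifts relevant here, the velocity is approximately the speed of light times the fractional wavelength shift; the spectral line and wavelengths in the example are hypothetical, chosen only for illustration:

```python
# Converting an observed spectral-line shift into a recession velocity.
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity_from_shift(lambda_obs, lambda_rest):
    """Low-redshift approximation v ~ c * z, where z = (shift) / (rest wavelength)."""
    z = (lambda_obs - lambda_rest) / lambda_rest
    return C_KM_S * z

# Hypothetical example: a line with rest wavelength 656.3 nm observed
# at 667.2 nm implies a recession velocity of roughly 5,000 km/s.
v = recession_velocity_from_shift(667.2, 656.3)
```

For velocities approaching a substantial fraction of the speed of light, the full relativistic formula would be needed, but the galaxies used to measure H0 locally are comfortably in the low-redshift regime.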
Measuring distances presents a greater challenge. The universe is so large that there is no direct way to measure its full size. There is no cosmological equivalent to a land surveyor’s rangefinder-no single method can provide a measure of the universe’s absolute size. Instead, astronomers rely on a series of techniques, each of which is suitable for a certain range of distances, and together these methods constitute the “cosmological distance ladder.”
For the nearest stars, distances can be measured by trigonometric parallax, which uses the baseline of the Earth’s orbit for triangulating a star’s distance using simple, high-school trigonometry. Distant stars in our galaxy and extragalactic objects require other, less direct indicators of distance. In these instances, astronomers rely on objects that exhibit a constant brightness, so-called “standard candles,” or those whose brightness is related to some quality of the object that is independent of distance, such as its period of oscillation or its rotation rate or its color. The standard candles must then be independently calibrated to an absolute unit of measure so that the true distance can be determined.
The most precise method for measuring distances is based on the observation of Cepheid variables—stars whose atmospheres pulsate in a very regular way for periods from 2 to more than 100 days. In the early part of the 20th century, the American astronomer Henrietta Leavitt discovered a relation between the average intrinsic brightness (or luminosity) of a Cepheid and its pulsation period: Brighter stars have longer periods. Knowing its intrinsic brightness, astronomers can deduce a Cepheid’s distance because the star’s apparent brightness decreases with the inverse square of its distance. Cepheids also happen to be intrinsically bright stars, so they can be observed in galaxies outside the Milky Way. In fact, Hubble discovered other galaxies outside of the Milky Way by measuring Cepheid variables. These and other distances enabled him to determine that the universe is expanding.
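The inverse-square step can be made concrete with a short sketch. The luminosity and measured flux below are hypothetical round numbers, not values for any real Cepheid; the point is only that once the period-luminosity relation supplies the intrinsic brightness, the distance follows from simple algebra:

```python
import math

M_PER_PARSEC = 3.0857e16  # meters in one parsec

def distance_from_inverse_square(luminosity_watts, flux_watts_m2):
    """Invert the inverse-square law, flux = L / (4 * pi * d**2),
    for the distance d in meters."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_watts_m2))

# Hypothetical example: a Cepheid whose period implies a luminosity of
# 1e30 watts, observed at a flux of 1e-12 watts per square meter.
d_meters = distance_from_inverse_square(1e30, 1e-12)
d_pc = d_meters / M_PER_PARSEC   # roughly 9,000 parsecs
```

Astronomers normally express this same inversion logarithmically, in magnitudes, but the flux form shows the physics more directly.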
The key to observing Cepheids in other galaxies is a telescope with sufficient resolving power to distinguish these stars from others that contribute to the overall light of the galaxy. This is where the HST came to play a central role. Because it orbits above our planet’s turbulent atmosphere, the space telescope’s resolution is about ten times better than that obtained by telescopes on Earth. Thus the HST opened up the possibility of observing Cepheids in a volume of extragalactic space a thousandfold greater than previously possible. (Recall that volume increases with the cube of the linear distance.) With the HST, Cepheids can be measured out to the nearest massive clusters of galaxies about 30 megaparsecs away. Beyond this distance, other methods are needed to extend the extragalactic distance scale.
Three of these methods rely on the global properties of spiral and elliptical galaxies. For example, the Tully-Fisher relation states that the rotational velocity of a spiral galaxy is correlated to its luminosity: Intrinsically bright galaxies rotate faster than dim ones. This relation has been measured for hundreds of galaxies, and there appears to be an excellent correlation. There is an analogous relation for elliptical galaxies, in which the stars in the brightest galaxies tend to have a greater range of orbital velocities (a high velocity dispersion). A third method takes advantage of the fact that the ability to resolve the stars in a galaxy decreases as its distance increases. For example, an image of a nearby galaxy might have an average of 10 stars per pixel (or individual picture element), whereas a distant galaxy would have a larger number, perhaps 1,000 stars for every pixel. The near galaxy would appear grainy with relatively large fluctuations in its overall surface brightness, whereas the distant galaxy would appear smoother. Each of these methods can be usefully applied for galaxies up to 150 megaparsecs away.
Among the most promising cosmological distance indicators is the peak brightness of type Ia supernovae. These explosions occur in a binary star system when material from a companion star falls onto a white dwarf star. The extra mass exceeds the white dwarf’s level of stability (the Chandrasekhar limit), causing it to collapse. This detonates the explosive burning of carbon, and the entire star blows up, briefly shining as brightly as a whole galaxy. The shape of the supernova’s light curve (a plot of how its brightness changes with time) is indicative of its peak luminosity: Bright supernovae tend to have shallower curves (just as bright Cepheids have longer periods), and the relative luminosity of the supernova can be determined quite accurately. Because supernovae are so bright, they can be used to measure H0 out to where recession velocities approach 30,000 kilometers per second (about 400 megaparsecs), and the effects of a galaxy’s peculiar motion drop to less than one percent. (Peculiar velocities of galaxies typically amount to about 200 to 300 kilometers per second.)
Another kind of stellar explosion, a type II supernova, can also serve as a distance indicator. Type II supernovae are produced by massive stars of various sizes and show a wider range of luminosity than type Ia supernovae. Although they are not standard candles, type II supernovae can reveal their distance through spectroscopic measurements of their expanding atmospheres and photometric measures of their angular size. These supernovae currently provide distances to 200 megaparsecs.
These distance indicators provide a means of measuring the relative distances to the galaxies. As with any map, however, we need an absolute scale. The calibration for all these methods is currently based on the Cepheid distance scale, the bottom rung on the distance ladder, and so these methods are considered to be secondary. (In principle, the type II supernovae can provide absolute measures of distance, but they were calibrated to the Cepheids for our work.) With one exception, all of the secondary distance indicators are calibrated directly by measuring Cepheid distances in galaxies that display one or more of the properties employed by the secondary method. The velocity-dispersion technique for elliptical galaxies cannot be calibrated directly by Cepheids. Instead, this method was indirectly calibrated by the Cepheid distances to clusters of galaxies containing these elliptical galaxies, and it has the largest uncertainties.
H0 = 72
Although each of the secondary distance methods provides an estimate of H0 on its own, the HST Key Project was designed to avoid the pitfalls of relying on a single method, so our study combined the results of the various approaches. Even so, readers should note that there is a reasonable level of agreement for the value of H0 among the different methods: Cepheids, 75; type Ia supernovae, 71; Tully-Fisher relation, 71; velocity dispersion in elliptical galaxies, 82; surface brightness fluctuations, 70; and type II supernovae, 72. A weighted average of these values yields an H0 of 72 +/- 8. This convergence can be seen graphically and is especially notable for galaxies with recession velocities beyond 5,000 kilometers per second (about 70 megaparsecs away), where the effects of peculiar motions are small compared with the Hubble flow.
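The idea of a weighted average, in which more precise methods count for more, can be sketched as follows. The H0 values are those listed above; the per-method uncertainties here are illustrative placeholders, not the Key Project's published errors:

```python
# Inverse-variance weighted mean: each estimate is weighted by 1/sigma^2,
# so tighter measurements pull the average harder.
# The sigmas below are hypothetical, for illustration only.

def weighted_mean(values, sigmas):
    weights = [1.0 / s**2 for s in sigmas]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

h0_values = [75, 71, 71, 82, 70, 72]   # Cepheids, SNe Ia, Tully-Fisher,
                                       # velocity dispersion, SBF, SNe II
h0_sigmas = [10, 6, 7, 10, 7, 9]       # hypothetical 1-sigma uncertainties

h0 = weighted_mean(h0_values, h0_sigmas)   # close to 72
```

Note how the velocity-dispersion method's outlying value of 82 is tempered by its large uncertainty, which is exactly the behavior one wants when combining methods of unequal reliability.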
So what does 72 mean? Recall that H0 is the current expansion rate of the universe. Although the Hubble constant is the most important parameter in constraining the age of the universe, determining a precise age requires that we know how the current expansion rate differs from its rate in the past. If the expansion has slowed or accelerated, then calculations of its age must take this into consideration.
Until recently, cosmologists generally believed that the gravitational force of all the stuff in the universe has been slowing its expansion. In this view the expansion would have been faster in the past, so the estimated age of the universe would be younger than if it had always been expanding at the same rate. (Because a faster rate would allow the universe to reach the “same place” in less time.) And this deceleration is what astronomers expected to find as they looked farther out into the universe, and further back in time.
We get an inkling that something isn’t right with this picture if we calculate the age of the universe assuming that it is slowing down. With a Hubble constant of 72, a decelerating universe turns out to be only 9 billion years old. The problem is that we know of stars in our galaxy that are at least 12 billion years old. Since the stars cannot be older than the universe, something must be wrong. The good agreement among several independent ways of estimating the ages of ancient stars gives astronomers confidence that these stellar ages are not in error.
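The 9-billion-year figure follows from a standard textbook result: for a flat universe containing only matter and decelerating under its own gravity, the Friedmann equation gives an age of exactly two-thirds of the Hubble time, 1/H0. A quick sketch of the arithmetic:

```python
# Age of a flat, matter-only (decelerating) universe: t = (2/3) / H0.
# Converting H0 from km/s/Mpc into an inverse time gives the Hubble time.

KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one billion years

def hubble_time_gyr(h0):
    """1/H0 converted from (km/s/Mpc)^-1 into gigayears."""
    return (KM_PER_MPC / h0) / SEC_PER_GYR

# With H0 = 72, the Hubble time is about 13.6 Gyr, and two-thirds of
# that is about 9 Gyr: younger than the oldest known stars.
t_decelerating = (2.0 / 3.0) * hubble_time_gyr(72)
```

This is the quantitative heart of the age problem: no amount of tinkering within a purely decelerating model can stretch 9 billion years past the 12-billion-year stellar ages.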
The resolution of the problem turns out to be a newly discovered property of the universe itself. In 1998, two groups of astronomers studying distant supernovae reported something remarkable: Type Ia supernovae that are far out in the universe appear to be dimmer than expected. Although it’s possible (though there is no evidence) that supernovae were intrinsically dimmer in the distant past, the simplest explanation is that these explosions are actually farther away than they would be if the universe were really slowing down. Instead of decelerating, the type Ia supernovae suggest that the expansion of the universe is accelerating. This acceleration has now been supported by further studies, and many astronomers now believe that there must be a previously unrecognized repulsive force in the universe-something that acts against gravity. It is being called dark energy.
To a certain extent, the existence of dark energy may have been anticipated by Einstein. Even before the expansion of the universe was discovered, Einstein’s original equations that described the evolution of the universe contained a term that he called the cosmological constant (Λ, lambda). Because the astronomers of his day assured him that the universe was not in motion, Einstein introduced the term to prevent any expansion or contraction, which would result naturally from the effects of gravity. (The cosmological constant, Λ, appears in the Friedmann equation.) When Hubble discovered the expansion, Einstein apparently referred to the cosmological constant as his greatest blunder: It didn’t seem necessary, and he had missed the opportunity to predict the expansion.
Until a few years ago, cosmologists generally set the Λ term to zero in Friedmann’s equation. However, the discovery that the universe is accelerating suggests that the term may have been necessary after all. The cosmological constant may represent dark energy, or the vacuum energy density (ΩΛ). The vacuum energy density has some very curious properties. It can bend space in much the same way that matter does, and so contribute to the overall geometry of the universe, but it exerts a “negative pressure” that causes the accelerated expansion we observe.
So how do we estimate the expansion age of the universe in light of these results? Using the Friedmann equation, an accurate estimate requires not only the value of H0, but also the density parameters, Ωm and ΩΛ, and the curvature term, Ωk. Inflationary theory (a very successful cosmological model that posits an extremely rapid expansion very early in the universe) and observations of the cosmic background radiation currently support a so-called “flat” universe, where Ωk = 0, so the curvature term drops out. In a flat universe Ωm + ΩΛ = 1 (by definition).
The mass density of the universe, Ωm, must be determined by observation and experiment. Figuring this one out is tricky. The rotational velocities of galaxies and the dynamics of galactic clusters suggest that the visible matter in the universe, the stuff that makes up stars and bright nebulae, constitutes only a fraction of its total mass. The rest appears to be some form of invisible material, or dark matter, which interacts gravitationally with these luminous bodies and so alters the dynamics of galaxies and clusters of galaxies. Stars and bright nebulae appear to account for merely 1 percent of the matter and energy in the universe. Another 4 percent might be accounted for by nonluminous bodies (some of the dark matter), such as planet-like bodies or warm intergalactic gas. This normal matter (made of baryons, such as protons and neutrons) adds up to no more than 5 percent of the critical density. Another 25 percent (and the rest of the dark matter) appears to be in the form of exotic (non-baryonic) matter, which is believed to consist of as yet unknown particles that interact with baryonic matter almost exclusively through gravity. That brings the total mass density of the universe to 30 percent (or Ωm = 0.3). So, in a flat universe, the vacuum energy density must be about 70 percent (ΩΛ = 0.7) of the total mass-plus-energy density. Now, by integrating the Friedmann equation with these density values, and an H0 value of 72, we derive an age for the universe of about 13 +/- 1 billion years, a value that agrees nicely with the ages of the oldest stars.
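The integration mentioned above can be sketched numerically. For a flat universe with matter and a cosmological constant, the Friedmann equation gives the age as an integral over the scale factor a (where a runs from 0 at the big bang to 1 today); a simple midpoint-rule sum suffices:

```python
import math

# Age of a flat universe with matter density omega_m and vacuum energy
# omega_lambda, from the Friedmann equation:
#   t0 = (1/H0) * integral from 0 to 1 of da / sqrt(omega_m/a + omega_lambda*a^2)

KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one billion years

def age_gyr(h0, omega_m, omega_lambda, steps=100_000):
    hubble_time = (KM_PER_MPC / h0) / SEC_PER_GYR  # 1/H0 in Gyr
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da  # midpoint rule avoids the a = 0 endpoint
        total += da / math.sqrt(omega_m / a + omega_lambda * a * a)
    return hubble_time * total

# With the article's values H0 = 72, omega_m = 0.3, omega_lambda = 0.7,
# the age comes out near 13 billion years.
t0 = age_gyr(72, 0.3, 0.7)
```

Setting omega_m = 1 and omega_lambda = 0 in the same function recovers the 9-billion-year decelerating case, which makes the role of the dark-energy term in resolving the age problem explicit.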
A few decades ago, the universe seemed a much simpler place. It appeared to be composed of ordinary matter, and the expansion of the universe could be described by the Hubble constant and the matter density alone. Today there is powerful evidence that non-baryonic matter makes up about one third of the total mass-plus-energy density. And new data suggest that the universe is accelerating, pointing to the existence of a mysterious dark energy that makes up most of the other two thirds. As yet, theory can provide no explanation for the dark energy. In fact, calculations based on modern particle physics disagree wildly with the observations (and this was the case even when the cosmological constant was thought to be zero). Hence, astronomical observations are hinting at fundamentally new physics, and a universe in which 95 percent of the total mass and energy is in new exotic forms.
It is an exciting time in cosmology. Gone are the days, perhaps, of the lone astronomer sitting in a prime-focus cage. But a suite of planned observations and experiments is ushering in a new era of unprecedented precision. Significant improvements to the measurement of the Hubble constant will come by the end of the decade with the launch of new satellite interferometers: SIM, the Space Interferometry Mission, planned by NASA, and GAIA, a European Space Agency project. These instruments will be capable of delivering 100 to 1,000 times more accurate trigonometric parallax distances to Cepheids within our galaxy, measurements that are used to calibrate the extragalactic Cepheids. The Cepheid calibration is currently the largest remaining uncertainty in the HST Key Project measurement of H0.
Many experiments are searching for the weakly interacting particles that could be the dark matter. Large teams are making careful measurements of the universe’s acceleration, and plans are under way to build a satellite largely dedicated to this effort. Encoded within the small fluctuations of the cosmic microwave background radiation is information about all of these cosmological parameters. An explosion of technical capabilities is yielding completely independent measures of these parameters and checks on the other methods. We may not yet have a complete view, but there is no question that we are in the midst of a revolution in our thinking about the nature of the universe we live in.