The Hubble Constant: Study Finds Simpler Approach To Measuring How Fast The Universe Is Expanding


The speed at which the universe is expanding remains one of the biggest mysteries in cosmology. The expansion rate, known as the Hubble constant in honor of Edwin Hubble (who in 1929 was the first to show that the universe is expanding rather than static, as previously believed), has been measured and recalculated again and again with the help of the Hubble Space Telescope, yielding contradictory results.

The most recent method of measuring the Hubble constant involves tracking down a class of pulsating stars called Cepheid variables, located about 6,000 to 12,000 light-years from Earth, and measuring their apparent brightness. Because a Cepheid's pulsation period reveals its intrinsic brightness, comparing the two yields its distance; once their brightness is calibrated, the Cepheids can be used "as cosmic yardsticks" to gauge the distance between the Milky Way and nearby galaxies that contain Cepheid stars. According to NASA, this helps us observe how fast these galaxies are moving away from us.
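The brightness-to-distance step described above rests on the standard distance-modulus relation between apparent magnitude m and absolute (intrinsic) magnitude M. A minimal sketch in Python; the magnitudes used in the example are invented for illustration, not an actual calibration:

```python
def cepheid_distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative numbers only: a Cepheid whose pulsation period implies an
# intrinsic brightness of M = -4.0, observed at apparent magnitude m = 10.0
d = cepheid_distance_parsecs(10.0, -4.0)
print(f"distance ≈ {d:.0f} parsecs")
```

The same relation, run in the other direction, is what "calibrating" means here: once a Cepheid's distance is pinned down independently, its intrinsic brightness follows.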

The next step in measuring the Hubble constant involves peering even further into the universe by using the Cepheids’ brightness to calibrate the brightness of Type Ia supernovae. These supernovae are bright enough to be seen from billions of light-years away and help measure the distance between the Milky Way and remote galaxies in the expanding universe.

The distance measurements are then compared with wavelength measurements of the light coming from these Type Ia supernovae (essentially determining how much their light is stretched to longer wavelengths by the expansion of space) and, voila, astronomers can calculate the speed at which the universe expands over time.
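At small distances this final step reduces to Hubble's law, v = H0 × d, with the recession velocity v read off from the redshift z. A toy sketch, using the low-redshift approximation v ≈ c·z; the galaxy in the example is made up:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def hubble_constant(redshift, distance_mpc):
    """H0 = v / d, with v approximated as c * z (valid only at low redshift)."""
    velocity = C_KM_S * redshift      # recession velocity, km/s
    return velocity / distance_mpc    # km/s per megaparsec

# Illustrative: a galaxy at redshift z = 0.01 whose supernova-based
# distance comes out to 43 megaparsecs
print(f"H0 ≈ {hubble_constant(0.01, 43.0):.1f} km/s/Mpc")
```

Real analyses fit many supernovae at once and use the full relativistic distance-redshift relation, but the one-liner captures why independent distance and redshift measurements pin down the expansion rate.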

It may sound simple in theory, but the process is far more complicated than it appears to be. Even though this method is hailed for its “unprecedented accuracy” and has reduced the constant’s total uncertainty to 2.4 percent (at the turn of the 21st century, the Hubble constant was known to an accuracy of 10 percent), it has nevertheless produced conflicting results.

In fact, the Gaia satellite star catalog just released by the European Space Agency (the biggest map of our galaxy to date, cataloging 1.7 billion stars in the Milky Way) has deepened the confusion over the value of the Hubble constant, according to a paper published last week on a scientific preprint site.

These “nagging discrepancies,” as NASA has recently described them, have made astronomers wonder whether some unknown physical processes might be at fault, such as the hidden impact of dark matter particles or even the elusive sterile neutrino.

But one cosmologist says there's no need to start changing the laws of physics just yet. John Peacock, from the University of Edinburgh in the U.K., believes these discrepancies are caused by simple unknown errors in the calculations that can be sorted out using a "Bayesian" statistical approach, reports Quanta Magazine.

In a paper published yesterday on a preprint site, Peacock and co-author Jose Luis Bernal, from the University of Barcelona in Spain, propose a meta-analysis of the conflicting Hubble constant measurements, grouping the results into separate classes according to small differences, such as which telescope was used and each team's implicit assumptions.

“Here we introduce a flexible methodology, BACCUS: BAyesian Conservative Constraints and Unknown Systematics, which deals in a conservative way with the problem of data combination, for any degree of tension between experiments,” the authors write in their paper.

Their method offers a way to analyze the underestimated errors and biases that systematically creep into Hubble constant calculations and to figure out how much they increase or decrease the measured expansion rate.
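The full BACCUS machinery is more involved, but the spirit of a "conservative" combination can be caricatured in a few lines. The sketch below is not the authors' method: it uses the simpler, well-known error-rescaling trick (inflate the combined error when measurements disagree more than their quoted uncertainties allow), and the measurement values are invented stand-ins:

```python
import math

def conservative_combine(measurements):
    """Combine (value, sigma) pairs, distrusting quoted error bars.

    Computes the inverse-variance weighted mean, then checks how well the
    measurements actually agree (chi-squared per degree of freedom). If they
    scatter more than their errors claim, the combined error is inflated by
    sqrt(chi2/dof): every measurement is 'guilty until proven innocent'.
    """
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    chi2 = sum(((x - mean) / sigma) ** 2 for x, sigma in measurements)
    dof = len(measurements) - 1
    naive_error = math.sqrt(1.0 / sum(weights))
    scale = max(1.0, math.sqrt(chi2 / dof))
    return mean, naive_error * scale

# Invented stand-ins for two discrepant Hubble-constant measurements,
# in km/s/Mpc: (value, quoted one-sigma error)
h0, sigma = conservative_combine([(73.5, 1.6), (67.4, 0.5)])
print(f"H0 = {h0:.1f} ± {sigma:.1f} km/s/Mpc")
```

With strongly discrepant inputs like these, the rescaled error comes out several times larger than the naive combined error, which is the qualitative point: a conservative combination refuses to report a precision the disagreement doesn't support.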

“It’s kind of the opposite of the normal legal process: all measurements are guilty until proven innocent,” Peacock explained in a statement.