How many digits are satisfactory in the measurement of pi?

In the 3rd century BC, Archimedes proved that the ratio of a circle’s circumference to its diameter is less than 3 1/7 but larger than 3 10/71. That’s about 3.141. Later mathematicians have computed what we now call π (pi) to greater and greater accuracy — but how many digits are “enough”?

The answer is 45 digits. That’s sufficient for one of the most grandiose calculations using π that I can imagine — namely, finding the circumference, C, of the known universe down to the width of a single quark. Not even one-thousandth the size of a proton, this subatomic particle is no more than 1 × 10⁻¹⁸ meters across. The universe has a radius, r, of some 14 billion light-years, or 1.3 × 10²⁶ meters, and thus a diameter twice as large that needs 45 digits to express in quark units. If we knew the diameter to this level of precision (which we don’t) and wanted to compute the circumference just as accurately, we would need π to the same number of digits.
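The digit count above can be checked with a few lines of arithmetic. This sketch uses the two figures quoted in the text (a quark width of at most 1 × 10⁻¹⁸ meters and a universe radius of 1.3 × 10²⁶ meters); the variable names are mine:

```python
import math

# Figures quoted in the text:
quark_width_m = 1e-18        # upper bound on a quark's width, in meters
universe_radius_m = 1.3e26   # ~14 billion light-years, in meters

diameter_m = 2 * universe_radius_m

# Express the diameter in quark widths; the number of decimal digits
# in that count is the precision of pi needed to match it.
diameter_in_quarks = diameter_m / quark_width_m          # about 2.6e44
digits_needed = math.floor(math.log10(diameter_in_quarks)) + 1
print(digits_needed)  # 45
```

A diameter of about 2.6 × 10⁴⁴ quark widths is a 45-digit number, so multiplying by π to the same relative precision requires 45 digits of π.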

According to Petr Beckmann’s 1971 classic, A History of Pi, English astronomer Abraham Sharp was the first to evaluate π past 35 digits; he attained 72 in the year 1705. The quest for more digits is still going on, and in 2002 Yasumasa Kanada and colleagues (University of Tokyo; see super-computing.org) surpassed 1 trillion digits. Their motivation is mathematical curiosity, not practicality!

— Roger W. Sinnott