Here is a rather unexpected way to compute pi that I have sometimes encountered when looking for massively parallel algorithms. As a foreword: if you are trying to compute the billionth digit of pi, you can stop reading right now.

This technique belongs to a class of algorithms called Monte Carlo methods, which consist in obtaining a numerical result by repeating the same experiment many times with random inputs. Monte Carlo methods are widely used (in much more serious ways), but today we are only going to discuss a toy example.

To compute pi, the experiment we are going to repeat consists in shooting a random point (x, y) within the unit square (such that x and y belong to [0, 1]). The next Figure depicts this unit square as well as the unit circle (centered at (0, 0) with a radius of 1):

Assuming a uniform distribution of points, the probability that a point within the unit square also falls inside the unit circle is the ratio between the area of the quarter of circle (pi/4) and that of the square (1), that is, pi/4.

```c
#include <stdlib.h>

double estimate_pi(long n)
{
    long cnt = 0;

    for (long i = 0; i < n; i++)
    {
        /* Shoot a point (x, y) within the unit square. */
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;

        /* Test if the point is within the circle.
           This test has a probability of pi/4 to succeed. */
        if (x * x + y * y < 1.0)
        {
            cnt = cnt + 1;
        }
    }
    return 4.0 * (double)cnt / (double)n;
}
```

The code above illustrates how to implement this method in practice; it returns the resulting estimate of pi. While this technique is easy to implement, its convergence is really slow, so it is not very useful in practice to compute many digits of pi. Here are some example outputs to illustrate this slow convergence rate:

10^3 iterations: PI = 3.108 (error of 3.35e-02)

10^6 iterations: PI = 3.141924 (error of 3.31e-04)

10^9 iterations: PI = 3.141608 (error of 1.57e-05)

However, this method can be genuinely useful for numerically integrating functions in possibly many dimensions, as illustrated by this Figure:

Besides their slow convergence rate, the outcomes of Monte Carlo experiments heavily depend on the quality of the random number generators. When parallelizing such algorithms, generating random numbers in parallel may actually be the hardest point to address.

To conclude, this is a perfect candidate if you are looking for a synthetic benchmark that scales massively, or if you want to test the quality of a random number generator, but it does not have much use in real life besides being a rather amusing way to compute pi...
