It's just a Big-O... I didn't put any of the constants in there.
Basically:
With "double", w = 53. The algorithm becomes impractical when n goes above a billion. The asymptote is reached when n reaches ~25 trillion digits. (give or take a factor of 10 or so...)
With "float", w = 24. The algorithm becomes impractical at just a few thousand digits. The asymptote is reached when n goes to a mere 60,000 digits... (again, give or take a factor of 10 or so...)
This is just one of several major algorithms in the program.
The program doesn't rely on it solely. So 25 trillion digits isn't the limit of the program.
EDIT:
Basically, when "w" is much larger than "log(n)", the complexity is roughly O(n log(n)). That's where the algorithm rules since it's quasi-linear.
So as long as you stay in that range, the algorithm remains efficient.
But as you push higher and higher, the "log(n)" starts creeping up. Eventually, it becomes inefficient. Then impractical...
And when n is large enough such that w = log(n), the algorithm fails completely.
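To see why, a rough back-of-envelope: each coefficient of the convolution is a sum of ~n products of points holding b bits each, so it needs roughly log2(n) + 2b bits to round back to the exact integer. Once that total approaches w, rounding fails. This little helper (`max_safe_n` is a hypothetical name, the constants and safety margins are deliberately ignored, so treat the numbers as order-of-magnitude only) just solves that inequality for n:

```python
import math

def max_safe_n(w, bits_per_point):
    # Rough estimate, ignoring all constants: a convolution coefficient
    # is a sum of ~n products of bits_per_point-sized values, so it needs
    # about log2(n) + 2*bits_per_point bits to round correctly.
    # Solve log2(n) + 2*bits_per_point <= w for n.
    return 2 ** (w - 2 * bits_per_point)

# double (w = 53), packing ~10 bits (~3 decimal digits) per FFT point:
print(max_safe_n(53, 10))   # ~8.6 billion points

# float (w = 24), same packing: almost nothing fits.
print(max_safe_n(24, 10))
```

Packing fewer bits per point buys you more n, but the log2(n) term never goes away, which is exactly the asymptote described above.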
This algorithm is called FFT multiplication.
Virtually all fast pi-programs use it: SuperPi, PiFast, QuickPi, etc...
All implementations of it stay in the "efficient" sizes where "log(n)" is much smaller than "w".
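For anyone curious what FFT multiplication actually looks like, here's a minimal toy sketch (pure Python, one decimal digit per FFT point — real programs pack many digits per point and use far more optimized transforms): multiply two integers by FFT-convolving their digit arrays, then round each coefficient back to an integer. That rounding step is precisely where the precision limit bites — once round-off error exceeds 0.5, the recovered coefficients are wrong.

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 FFT on a list of complex values (len must be a power of 2).
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    ang = (2j if invert else -2j) * cmath.pi / n
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(ang * k) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fft_mul(x, y):
    # Multiply two non-negative ints via FFT convolution of their base-10 digits.
    a = [int(d) for d in str(x)][::-1]   # least-significant digit first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    prod = fft([u * v for u, v in zip(fa, fb)], invert=True)
    # Round each coefficient back to an exact integer. This is the step that
    # fails once accumulated round-off error exceeds 0.5.
    coeffs = [round(c.real / n) for c in prod]
    # Python big ints absorb the carries for us.
    return sum(c * 10 ** i for i, c in enumerate(coeffs))
```

With one digit per point and double precision this toy version is safe far beyond anything you'd feed it, but the same rounding failure mode is what caps the real implementations.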