FIVE TRILLION digits..:shocked:
http://www.numberworld.org/misc_runs...nounce_en.html
that's XTREME ;)
i would like to point out that his machine is still counting slower than our national debt, lol
also, how do you compress a file with numbers that seem random?
this should be the next mainstream stressing/benching app haha
The XS thread for the y-cruncher app is here:
http://www.xtremesystems.org/forums/...d.php?t=221773
Here are a couple of screenshots from the run:
Ignore the sanity check error and the low CPU utilization % - they are artifacts from splitting the computation into multiple chunks.
http://www.numberworld.org/misc_runs...ndo_verify.jpg
http://www.numberworld.org/misc_runs..._kondo_end.jpg
Am I wrong, or was this run with a development version of y-cruncher (doing the calculation in multiple chunks and resuming)? And is this the ETA poke349 hinted at in the y-cruncher thread?
i just noticed... i'm guessing those voltage readings are wrong, unless that X5680 is really that godly
i wonder how this program would run on an ibm power system. in the next half year i'll get my hands on a power7 system with fully equipped building blocks for evaluation purposes. i'd love to penetrate that beast with something like that :D
http://digg.com/programming/Pi_World...rillion_Digits
guess this Frenchman only got to hold the record for half a year...
http://www.physorg.com/news182067503.html
I'm wondering how much time we could cut off of that with a pair of the same X5680's in a SR2 board running at close to 5GHz..:rofl:
http://img97.imageshack.us/img97/8044/43ycruncher.jpg
Yay I beat a record! My own! j/k
If you have enough hard drives and enough patience, you can pretty much do as much as you want - 10 trillion if you want. ;)
Memory quantity only affects the speed, not the max # of digits.
ASCII is actually a pretty "wasteful" format to store digits. So this "compression" is just getting rid of that "waste".
But otherwise, you are correct. The digits won't compress at all beyond a certain point.
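To give an idea of the "waste" (just a toy sketch I'm writing here, not y-cruncher's actual output format): ASCII spends 8 bits on each digit, but a decimal digit only carries log2(10) ~= 3.32 bits of information. Even naively packing two digits per byte cuts the file in half:

    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Pack two decimal digits into each byte (BCD-style).
    // 8 bits/digit -> 4 bits/digit, so the file shrinks by half.
    // The theoretical floor is log2(10) ~= 3.32 bits/digit, which is
    // why the digits won't compress much further than that.
    std::vector<uint8_t> pack_digits(const std::string& ascii_digits) {
        std::vector<uint8_t> packed;
        packed.reserve((ascii_digits.size() + 1) / 2);
        for (std::size_t i = 0; i < ascii_digits.size(); i += 2) {
            uint8_t hi = ascii_digits[i] - '0';
            uint8_t lo = (i + 1 < ascii_digits.size())
                       ? ascii_digits[i + 1] - '0' : 0;
            packed.push_back(static_cast<uint8_t>((hi << 4) | lo));
        }
        return packed;
    }

A general-purpose compressor gets you roughly the same factor for the same reason - after that, the digits are statistically random and there's nothing left to squeeze.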
I wish... But standards don't change that easily. :(
If you can convince HWbot to take a look at y-cruncher, then there might be a chance... (I have added validation and internal checks to block timer-hack cheats for the purpose of benchmarking...)
But seeing as how MaxxMem (which even has a GUI and HWbot integrated validation) still doesn't award points after 7 months... lol
Yes, it's a development version. But now it's public. The ability to stop and resume, and break it up into chunks was added in v0.5.4 - which I've also released along with this record announcement.
So yes, I've been specifically waiting for this computation to finish to release v0.5.4.
EDIT: Shigeru Kondo and I had originally intended this to be a single contiguous run. But we had a hardware error occur roughly 8 days into the computation. (YES... we had a hardware error on NON-OVERCLOCKED hardware...)
The program detected the error but wasn't able to recover from it. So we had to kill the program and restart it from the last save point.
In all, the computation was broken into only 2 chunks - before and after the hardware error.
Right from the start, we both anticipated some sort of hardware failure to occur at some point (especially with 16 HDs...), so I added the ability to restart a computation - and are we glad I added that feature... lolz
As of right now, it won't run well because a large scale power7 server is NUMA and the program isn't optimized for it.
Also, I would need to specially re-optimize much of the low-level code for power7. (right now, a lot of it is hard-coded SSE2 - SSE4.1 which power7 doesn't have)
I can get it to compile on almost anything if I disable all the SSE and all the x86/x64-specific stuff, but that also kills the efficiency...
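To show what I mean (a simplified toy example, not code from y-cruncher itself) - the hand-written paths look something like this, and the #else branch is all a non-x86 chip like power7 would get:

    #include <cstddef>
    #if defined(__SSE2__)
    #include <emmintrin.h>   // SSE2 intrinsics
    #endif

    // Add two arrays of doubles. The SSE2 path processes 2 doubles per
    // instruction; the fallback is plain C++ that compiles anywhere.
    void add_arrays(double* dst, const double* a, const double* b,
                    std::size_t n) {
    #if defined(__SSE2__)
        std::size_t i = 0;
        for (; i + 2 <= n; i += 2) {
            __m128d va = _mm_loadu_pd(a + i);
            __m128d vb = _mm_loadu_pd(b + i);
            _mm_storeu_pd(dst + i, _mm_add_pd(va, vb));
        }
        for (; i < n; ++i) dst[i] = a[i] + b[i];  // odd element, if any
    #else
        // Portable fallback: compiles on power7 (or anything else), but
        // loses all the hand-tuned vectorization.
        for (std::size_t i = 0; i < n; ++i) dst[i] = a[i] + b[i];
    #endif
    }

power7 does have its own vector extensions (VMX/VSX), so in principle those paths could be rewritten with its intrinsics - but that's exactly the re-optimization work I mentioned.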
My workstation/dev-station has 64GB :D (see siggy). It can do a 12,000,000,000 digit run all in ram. I use this machine to write and test this program. ;)
But since the program can use the disk(s), the quantity of ram has little effect on the max # of digits you can do. It only affects the computation speed...
I'm not sure it will help at all. :(
At these sizes, only 3 things seem to matter: (in order of most to least important)
- Disk Bandwidth
- Memory Quantity
- Memory Bandwidth
The first one can easily be fixed by cramming in as many hard drives as you can. (which is exactly what we did: 16 x 2TB 7200RPM Seagates for 2GB/s sequential bandwidth)
But the latter two will be limited by the hardware.
The SR-2 only supports up to 48 GB of ram.
So:
2 x X5680 @ 3.33 GHz + 96 GB ram
will probably still beat
2 x X5680 @ 5 GHz + 48 GB ram
It might be possible to pull ahead if you aggressively overclock the ram and throw in even more hard drives... but getting it stable for 90 days is a different matter.
how does hard drive speed limit this? it took 2 months to fill up a few drives
At this size (or any size that doesn't fit in ram), you need to do arithmetic on the disk itself. So in some sense, it uses the disk as memory.
And since disk speed is really slow (~100 MB/s per disk vs. ~30 GB/s for 2 x i7 bandwidth) it immediately becomes a limiting factor if you try to use the disk like ram.
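The structure is roughly like this (a stripped-down toy, nothing like the real out-of-core arithmetic): stream the operand through a ram-sized buffer, transform it, and write it back. Every full pass over a multi-TB operand is then bounded by the disk, no matter how fast the CPU is.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy example of "using the disk as memory": apply a per-word
    // operation to a file far larger than ram, one buffer at a time.
    // (Real bignum arithmetic also has to handle carries across chunk
    // boundaries and needs 64-bit file offsets for multi-TB files.)
    void transform_on_disk(const char* path, uint64_t addend) {
        const std::size_t CHUNK = 1 << 20;          // ~8 MB of ram at a time
        std::vector<uint64_t> buf(CHUNK);
        std::FILE* f = std::fopen(path, "r+b");     // file must already exist
        if (!f) return;
        long pos = 0;
        std::size_t got;
        while ((got = std::fread(buf.data(), sizeof(uint64_t), CHUNK, f)) > 0) {
            for (std::size_t i = 0; i < got; ++i)
                buf[i] += addend;                   // the "arithmetic"
            std::fseek(f, pos, SEEK_SET);           // seek back, overwrite
            std::fwrite(buf.data(), sizeof(uint64_t), got, f);
            pos += static_cast<long>(got * sizeof(uint64_t));
            std::fseek(f, pos, SEEK_SET);           // resume reading
        }
        std::fclose(f);
    }

Each pass like this reads and writes the whole file once. 5 trillion digits is on the order of 2 TB packed, so even at our ~2 GB/s aggregate that's roughly half an hour per pass - and a big multiplication needs multiple passes.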
i thought it would work on a few million numbers at a time, save them off and continue on.
congrats! a new benchmark eh? and more stressful than prime95? i'll give it a try! :D
thanks poke for all the work you put into this, great stuff! :toast:
if what you need is disk bandwidth and not disk capacity, then why aren't you using ssds? :confused:
if you use several PCIe ssds you should be able to double or quadruple that bandwidth ;)
if cost is an issue, contact ocz or other hw manufacturers, they might actually be interested...
Congrats! Wondering how one of my work rigs would stack up to this. Might tool around with it later. Any chance of a Linux binary to run instead of Windows?
This is amazing stuff, but it's even more amazing that Pi has been studied for over 2000 years and we still can't calculate its exact value :p: not even after FIVE TRILLION digits. It means our knowledge and understanding of the logic behind the circle is still very primitive, indeed.