No question about that. It would be tough to show 20 passes stable; we would see a lot lower clocks. :D
Number of passes is less important than the problem size in my opinion... I can run 2GB problems all day at 1.4v (LLC off) but 5GB problem sizes will always produce a BSoD in less than 10 minutes, go figure. 20 passes at 5GB takes around an hour and twenty minutes though.
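For anyone wondering how those problem sizes map to RAM: if I have the LinPack math right, the problem size is the dimension N of the double-precision matrix being solved, so a problem takes roughly 8*N^2 bytes. Quick sketch (Python, names mine):

    # LinX/LinPack solves an N x N system of 8-byte doubles, so a problem
    # uses roughly 8 * N^2 bytes of RAM (plus a little overhead).
    import math

    def problem_size_for(mem_gib):
        """Largest matrix dimension N that fits in mem_gib GiB of doubles."""
        return int(math.sqrt(mem_gib * 2**30 / 8))

    for gib in (1, 2, 5):
        print(gib, "GiB -> N ~", problem_size_for(gib))
    # 1 GiB -> N ~ 11585, 2 GiB -> N ~ 16384, 5 GiB -> N ~ 25905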
Currently my setup is averaging anywhere between 53 and 55 Gflops. The more stuff fighting for memory space the lower the number will be.
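For reference, if I have it right, the GFlops number is just LinPack's fixed operation count for an N x N solve, about (2/3)N^3 + 2N^2 flops, divided by wall time:

    # LinPack-style GFlops: fixed flop count for the solve / wall time.
    def gflops(n, seconds):
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
        return flops / seconds / 1e9

    # e.g. a ~5 GiB problem (N ~ 25905) finishing a pass in 215 s:
    print(round(gflops(25905, 215), 1))  # ~53.9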
24 hours Prime95 blend stable at 1.42v (Link). I didn't run Prime95 at 1.40v, but I probably should have, just to see if it would show an error.
So far, if LinX doesn't crash or error, Prime95 hasn't shown any errors either. Since then I've enabled Loadline Calibration and was able to drop my vcore down to the 1.36v in my sig, 20 runs of 5GB problem size LinX stable.
If I had a Q9450 (12MB L2) and an X48 MB, I would not have gone i7. But I had a Q6600 and an ASUS P5B-D 965-Express and was ready for a change. Plus, I had a buyer for it.
I don't like changing motherboards often; I hook too much to my system. I am not a gamer, but a big-time 24/7 multi-tasker with two TV cards and security cameras for 2 properties. I won't talk about the 6 internal hard drives and 5 external.
You've probably read this: LGA 1366 has not been advertised on TV by Intel and will be pushed aside as a workstation platform. Sure, it was pushed by the performance-parts vendors, and the review sites need something to talk about. The mainstream-to-performance desktop platform will soon be out as LGA 1156.
http://chuckbam.com/Post2/LGA-1156.PNG
And this looks like the new D0-stepping i7 920 to me.
http://chuckbam.com/Post2/xeon.PNG
I ran this all simultaneously with no crash or throttle-down. I maxed out my 12GB of RAM, man. Prime95 does not stress an i7 very much on its own; you can forget it is running. I ran this in my normal TSR setup. In fact, PerfectDisk started defragging my drives at 10 PM.
I am done testing for a while.
http://chuckbam.com/i7_P6T-D/TT051409.PNG
What's the highest gigaflops anyone has churned out yet? I have an idea to log the top 20 or so into a database. This program is so hard on the system that it should level the playing field between extreme overclockers and stability nuts. Any ideas?
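If the database idea gets off the ground, something as simple as SQLite would do. A minimal sketch, all names hypothetical:

    # Hypothetical results table for the top-20 idea, via Python's
    # built-in sqlite3. Field names are made up, not from LinX.
    import sqlite3

    conn = sqlite3.connect("linx_results.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS results (
            user      TEXT NOT NULL,
            cpu       TEXT NOT NULL,
            clock_mhz INTEGER,
            mem_mb    INTEGER,  -- problem-size memory
            passes    INTEGER,
            bits      INTEGER,  -- 32 or 64
            gflops    REAL NOT NULL
        )
    """)
    conn.commit()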
Well, I am at 64.5 GFlops atm.
I vote the database should be 100 runs at max memory (97 percent of available memory).
If there's a draw on flops, the run with more memory places higher (quick sketch of that ordering below).
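    # Proposed ranking: GFlops descending, memory as the tiebreaker.
    runs = [
        ("user_a", 64.5, 5120),  # (name, gflops, mem_mb) -- made-up data
        ("user_b", 64.5, 4096),
        ("user_c", 55.0, 6144),
    ]
    top20 = sorted(runs, key=lambda r: (r[1], r[2]), reverse=True)[:20]
    for rank, (name, gf, mem) in enumerate(top20, 1):
        print(rank, name, gf, mem)
    # user_a places above user_b: same flops, more memory.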
Sounds like a plan. A hundred passes is too much, however; I'm thinking 25 should be enough so we don't dissuade too many people from participating. We'll see.
I just noticed your post. I've actually had this idea for about three months now, since before XS went down: some type of gigaflops competition. Back then I was on socket 775, and from the results I'm getting with my i7 system, the differences aren't significant even with DDR3 and an IMC. All the same, we could have results for different platforms, or we could make it based strictly on users and their results regardless of system.
@chuckbam
Since you have your Core i7 920 running at 3.6GHz, just like my Q9450@3.6GHz, I was wondering what EVEREST reports as the CPU power consumption for your Core i7 920@3.6GHz.
Here's a link to a screen shot that I posted with the results.
http://www.xtremesystems.org/forums/...postcount=2449
My maximum CPU power consumption is around 85-86 watts reported by EVEREST while LinX is running.
Thought I may as well start things off with my Q9550 @3.4GHz.
http://w7abnq.bay.livefilestore.com/...ity%20test.png
Full spec:
Q9550 running stock VID 1.2500 @3.4GHz
Sunbeam Core Contact Freezer with Scythe S-Flex 1600 fan
4GB (4x1GB) Corsair XMS3 1600 @ 6-6-6-18, 1280MHz
Gigabyte P35T-DQ6, F6 BIOS
Corsair HX520
Gainward GTX260 216 55nm
I really don't see the need to run a lot of passes in LinX. It stresses the CPU so hard that if it's not stable, it's going to fail in short order.
I do see the need because we can all benefit from it. We can actually see what brand of board and memory can give us the best 24/7 stable clocks.
It's like I always say: benchmarks don't mean anything to me if I can't run 24/7 stable like that. :D
Perhaps 20 passes with 1GB isn't enough and we have to use more passes. :shrug:
Ooh... the reason I want it per CPU type is that it's a lot easier with an Extreme Edition processor, where you can just raise the multiplier, or with any other processor that has a higher multiplier. ;)
It will be hard to level the playing field, because the more memory you use, the more vcore is needed. Also, LinX 32-bit vs. 64-bit is a significant difference.
For 4.2GHz:
- I need 1.412v in BIOS to make Prime95 stable for 14hrs (see sig).
- I can go 1-2 notches below that and run LinX at 1GB memory for 50 passes, as 1GB memory is near useless for testing imo.
- With LinX using 2GB memory, I again need 1.412v to pass 20 runs (have also passed 100 runs); that seems roughly equivalent in vcore requirement to 14hrs of Prime95.
- With LinX 64-bit using 4GB memory, it will fail on the 1st run at 1.412v (despite being Prime-stable 14+hrs). I need 1.425v, i.e. 2 notches higher, to pass 5 runs (and it will then also pass 20 runs).
- With LinX 64-bit using 5GB memory, I need 1.43v to pass 5-10 runs (have also passed 30 runs). Using the Prime 14+hrs vcore of 1.412v = quick BSoD every time.
So I'm not sure how you will level the playing field when comparing 32-bit vs. 64-bit LinX, or comparing 1GB of RAM (worthless imo for testing) to 5GB. Even comparing 2GB to 5GB will not be equivalent.
Since I now know that if I can pass 5-10 passes of LinX at 4-5GB I can easily pass Prime95 (even using lower vcore), and given I have never crashed from too-low vcore after being Prime-stable 14hrs, all my i7 stability testing will be quick 5-pass LinX runs using 4-5GB, followed by a final run of 10-20 passes using the same. The times I have run much longer, it still passed; but since that point is already beyond Prime-stable, where I have never had issues, I'm not sure even that much is needed.
And even if people all did such testing, all using 4-5GB with 5, 10, or 20 passes, you won't see lower overclocks; you might see a couple notches higher vcore. Whether that is necessary or not is another debate.
For me LinX is just about efficiency. Why spend 14hrs testing with Prime when 15 minutes of quick testing followed by a final hour or so will suffice with LinX?
I really like the database idea of yours. If you can accomplish it, I'm sure many would take part in that "competition". :up:
Agreed. At the least, as rge noted, there should be x64 and x32 categories, as the results and load differ a lot between the two.
Same memory/problem-size value is, on the one hand, a good way to get comparable results (say, between different CPUs, RAM, maybe even NBs at different clocks) and to make it somewhat fair for those without much RAM; but on the other hand, if it's going to be some sort of Top-xx, then maybe there is no point in imposing such limits. :shrug:
And rge is, as always, right: the more memory, the higher the stress, and the higher the vcore requirement.
I'm not sure about that. When you take less memory you need more passes; I think it's just the time that matters.
Whether you take 20 passes with 1GB or 20 passes with 4GB: the time to finish the 20 passes with 4GB is much longer, so you have more chance that it fails.
With 80 passes of 1GB the time to finish will be the same, and you will likewise have more chance of failure.
Yes, of course, because when you use more memory you have more chance that your memory gives an error.
When you use less memory and more passes, you have more chance that your CPU is going to give an error, because of more changes in CPU load.
It's just to have the same problem size for everyone.
As I stated in the post above, at 1.412v I can pass 50 (and even 100) runs of LinX at 1GB; 100 runs take 50 minutes.
At 5GB with 1.412v, it will always fail within 1 minute.
Running for 10 minutes at 5GB (3 passes) requires 2 notches higher vcore than running 50 minutes at 1GB (100 passes). A rough scaling check is sketched below.
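Those numbers fit the LinPack scaling, if I have it right: work per pass grows as N^3 while memory only grows as N^2, so per-pass time grows roughly as (memory ratio)^1.5 at the same GFlops. Rough check:

    # Why equal wall time isn't equal stress: per-pass work scales ~N^3,
    # memory ~N^2, so time per pass ~ (memory ratio)^1.5 at fixed GFlops.
    mem_ratio = 5 / 1                  # 5 GB problem vs 1 GB problem
    print(round(mem_ratio ** 1.5, 1))  # ~11.2x longer per pass
    # Yet the 5 GB run fails in under a minute at a vcore that survives
    # 50 minutes of 1 GB passes: the larger resident set, not wall time
    # alone, is what exposes the instability.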
Best thing to do is try this yourself.
I had the same issues with other CPUs, which is why I stopped using small problem sizes even for initial testing to get approximate settings for different MHz targets. A larger problem size is more time-efficient; I can usually hose an unstable OC in minutes using a large problem size. Run very small problem sizes and you get the same time inefficiency as Prime95: yes, if you use 1000 passes with a small problem size you will eventually get a failure, but then you might as well be using Prime.
Then we keep only one rule, and that is the amount of time to run LinX. ;)
The rules should be:
1. Must run in diagnostic mode.
2. Must be x64.
3. Must use 97-98 percent of available memory.
4. Must run 2 hours minimum.
(Rough checker for these sketched below.)
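    # Hypothetical validation of a submitted run against the four rules;
    # field names are made up, not anything LinX actually exports.
    def meets_rules(run):
        return (run["diagnostic_mode"]
                and run["bits"] == 64
                and run["mem_used_gb"] / run["mem_total_gb"] >= 0.97
                and run["runtime_min"] >= 120)

    print(meets_rules({"diagnostic_mode": True, "bits": 64,
                       "mem_used_gb": 11.7, "mem_total_gb": 12.0,
                       "runtime_min": 135}))  # True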
About more GB vs. higher clocks: both give higher gigaflops.
More cores, more RAM, and higher clocks give the highest; HT makes it worse atm.
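Those figures line up with theoretical peak, if you assume (my assumption, not measured) that these chips retire about 4 double-precision flops per core per cycle with SSE:

    # Back-of-envelope peak GFlops = cores * GHz * flops-per-cycle.
    def peak_gflops(cores, ghz, flops_per_cycle=4):
        return cores * ghz * flops_per_cycle

    print(peak_gflops(4, 3.6))   # 57.6 -> 53-55 measured is ~93-95% of peak
    print(peak_gflops(4, 4.03))  # ~64.5 -> roughly the clock needed for 64.5
    # HT adds threads but no extra FP units, which fits it hurting atm.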