I thought the newest was 0.5.5.2, which is posted later in the thread?
Yes, but the same link is in the 1st post too.
Is there a way to run a problem size of 10000 but use the least amount of memory, so as to only test the CPU itself? Like what Prime95 does when you run Small FFTs, where it doesn't use RAM and just stresses the CPU. When I set the problem size to 10000, the RAM used is 775 MB.
I'm afraid not. Linpack solves a system of linear equations (a matrix with dimensions Problem Size x Leading Dimensions, 10000 x 10008 in your case) where every element takes a fixed amount of memory (8 bytes, a double-precision value). 99% of the memory Linpack consumes is used just to store that matrix, so problem size and memory usage are bound together.
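To put numbers on that, here's a quick Python sketch of the arithmetic (the +8 padding of the leading dimension is just my guess based on your 10000 -> 10008 example, not a documented rule):

Code:

    # Rough Linpack memory estimate: the Problem Size x Leading Dimensions
    # matrix of 8-byte doubles dominates everything else.
    def linpack_memory_mib(problem_size, leading_dim=None):
        if leading_dim is None:
            leading_dim = problem_size + 8  # guessed padding, see note above
        return problem_size * leading_dim * 8 / 1024**2

    print(linpack_memory_mib(10000))  # ~763.5 MiB, close to the ~775 MB you saw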
The good news for you is that Linpack is pretty light on memory stressing; minor memory instabilities usually don't cause errors in Linpack.
No, that should be impossible in principle. A higher problem size means higher RAM requirements, because Linpack can only run its tests in system memory.
Edit: NVM, there's a more technical explanation above ^^
It's not really a drawback; the application runs pretty well, and it's way faster than any other program at detecting errors. I was just wondering, since I often test RAM or NB stability (without OC'ing the CPU) and then run a CPU-only stress test to see if it's OK. But since LinX is so quick at detecting errors, it makes up for it.
So, what's the idea behind using a big amount of memory during LinX/Linpack tests? If memory isn't stressed much, and Linpack therefore doesn't tell us a lot about memory instabilities, wouldn't it be better to focus mainly on testing CPU stability and use as little memory as possible?
I have to run Memtest anyway to verify memory stability (or, for those who prefer testing from within the OS: OCCT or Prime)!
That's yet another Linpack peculiarity: the more memory it consumes, the harder it stresses the CPU. Linpack seems to have different "phases" during each run. The most stressful one is in the very middle, while there are also some fixed-length phases when it does almost nothing, judging by temps: at least two, at the beginning and at the end of each test run (my guess is they're caused by memory load/store operations on huge amounts of data, while in the middle it's mostly the cache being accessed). With higher problem sizes/memory amounts, that "real stress" phase takes significantly more of the time, at least percentage-wise.
At low problem sizes there's very little continuous load, but even then Linpack does detect instability; it just needs much more time to do it (though the chance of getting a BSOD is also lower).
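If it helps, here's a toy model of that percentage-wise claim. It assumes (my assumption, nothing official) that the stressful phase is the O(N^3) factorization itself while the quiet phases are O(N^2) data movement over the matrix; the constant 2000 is made up purely for illustration:

Code:

    # Toy model: fraction of a run spent in the "real stress" phase.
    # Factorizing an N x N system costs on the order of N^3 operations,
    # while just moving the matrix around costs on the order of N^2,
    # so the stress fraction behaves like N / (N + c) for some constant c.
    def stress_fraction(n, c=2000):  # c is an arbitrary illustrative constant
        return n**3 / (n**3 + c * n**2)

    for n in (5000, 10000, 20000, 40000):
        print(n, round(stress_fraction(n), 3))  # 0.714, 0.833, 0.909, 0.952

Which matches what the temps show: the bigger the problem size, the larger the share of each run spent under full load.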
Dualist explained why the extra memory is needed, but just to give a real-world example of the difference it makes to test at different memory sizes:
I need a 1.41 vcore BIOS setting (LLC on) for 4.2 GHz, Prime-stable 12+ hrs (Small FFTs and Blend). To get a 20-run pass of 64-bit Linpack at 2200 MB of memory (problem size 17000) or larger, I need to increase vcore 1 notch higher. I am then also stable at 60 runs, and still stable using more memory/a higher problem size. But if I use only 1000 MB (problem size 11500), I can drop vcore ~3 notches below stable and make 20 runs no problem, though it would fail eventually, I assume. If I use less than 1000 MB, I can drop vcore even lower, 4-5 notches below Prime-stable, and still make a 20-run pass. That is using 64-bit Linpack. With 32-bit, the same is true, except I might need an extra notch of vcore to be Prime-stable after getting a 20-run pass in 32-bit Linpack.
Using 64-bit Linpack, from now on I will assume a 20-30 run pass at 2200 MB of memory or higher is 12-hr Prime-stable (CPU-wise; it holds on all my stable points). I can quickly find all my stable points using that short test, then run a final confirmatory 12-hr Prime test (which also checks issues other than the CPU).
OK folks, I see:
So it's always a good idea to run this test at max memory to get the worst-case scenario!
But this still only stresses the CPU! To get overall system stability, you need to run some other tests too, like Prime, OCCT, or Orthos, and of course Memtest to nail down memory issues!
Guess I got it now.
Regarding the memory load/unload at the beginning and end of each pass: I always wondered why LinX/Linpack had this drop in CPU load (and of course core temps) at the beginning of each pass, compared to Prime or even CoreDamage, where the load shoots up to 100% and never comes down until the test is stopped.
That makes sense now...
I'm running a problem size of 10000, and going one notch up in vcore gets me through one or two more problems solved before it fails. Is it going to level off, or will I need more and more vcore the more problems I run?
If I underclock my CPU and only run a high FSB, so as to test chipset and memory stability, can I still run LinX and get a good indication, or does LinX only test the CPU?
In my experience on my i7 and previous CPUs (E8600/E8400), once I get to about 2300 MB of memory, or a 17000 problem size, I quit needing more vcore to stay stable as I go further up in problem size, but others may vary.
But going from 1000 MB of memory (an equivalent problem size of ~10000) up to 2200 MB, I need a couple notches more vcore. In fact, I now only test at a problem size around 17000; it is faster at crashing unstable OCs.
On Core i7, once I am Linpack-stable, I am Prime Blend-stable without adjusting anything else, but it has an integrated memory controller... and of course I am running my memory at stock, so it's no worry.
If you are using FSB, i.e. a Core 2 Duo/Quad, then you may or may not be stable on mem/FSB/NB even if you pass LinX. Personally, I would run Prime or OCCT once I got the CPU stable, to double-check the rest.
Yeah, makes sense. I run Prime Blend to test some NB and memory stability, and usually after test two or so it detects errors. But yeah, I managed to pass 10 problems at 10000 with no errors, so I guess that equals something like 12 hours of Prime Small FFTs: before, I ran Small FFTs in Prime for hours with no errors, while 3 passes at 10000 got me errors. I'm guessing if I pass 10 without problems I'm good.
I'm an old-timer with overclocking, and all this new software is blowing my mind, so quick question: if I can pass 20 runs @ 10K and 20 runs @ 9K back to back, does that mean my system is fairly stable?
I realize this only tests the CPU and not memory, so I'll use P95 for that, but from a CPU standpoint... it's okay?
Thanks!
Yes, that means it's stable. I was able to run 10 passes at 10K stable, and that translates to almost 10 hours or so of stability in Prime95 Small FFTs.
LinX 0.5.6 is here:
- interface fixes and enhancements; Linpack mode (x32/x64) and number of threads are now displayed in the status bar
- CPU name is now displayed in the status bar
- Linpack mode is now mentioned in the text log too
Maybe someday we'll see clocks right after that CPU name... ;)
There's also a mini-mode now. I have no clue who might want to use it, but it looks nice with glass on Vista and Win7. If you're interested, look in the ini for EnableMiniMode; once it's enabled, you'll be able to switch between the 2 modes via the Settings menu and the right-mouse-button menu in full and mini-mode respectively. And note that it might be buggy, like every new thing I do. :)
Attachment 93713 (looks so very cyan on Win7 :) )
Well, 20 runs are usually a good enough indication of stability, but you should consider running, say, 50 runs with more memory to be completely sure it's stable. Linpack is just like Prime: the more, the better.
Works great!
Another LinX run bites the dust... this one is only 20 runs instead of my normal 25.
What about this ->
http://img25.imageshack.us/my.php?im...tableliez0.jpg
Stable or not?
With 32-bit LinX, once I am Prime-stable, I have been LinX-stable. With 64-bit LinX, which you are using, it's a little harsher than Prime on my system; I had to raise vcore 2 notches to be LinX 64-bit stable (when using large problem sizes, which stress the CPU the most, like you are doing).
What do you guys think about a spartan interface, with all the checkboxes moved to the Settings window, like this:
Attachment 94644
I just have a feeling that nobody uses them anyway, or at least not frequently. The only one I'm not sure about is the x64 option, but then again, on 32-bit systems it's absolutely useless, while on 64-bit systems, which usually have 4+ GB of RAM, who'd want to run 32-bit Linpack with its memory limitations? (Rough numbers below.)
Or simply leave it all the way it is now? :shrug:
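For the curious, those 32-bit memory limitations are easy to ballpark. Assuming the classic ~2 GiB per-process address space of a 32-bit Windows process (the real ceiling varies with large-address-aware builds and fragmentation), the largest square matrix of 8-byte doubles that fits works out to:

Code:

    # Rough ceiling on 32-bit Linpack's problem size: the N x N matrix of
    # 8-byte doubles must fit in the process address space. 2 GiB is the
    # classic 32-bit Windows limit; LinX's actual cap may be lower.
    import math

    def max_problem_size(mem_bytes):
        return math.isqrt(mem_bytes // 8)

    print(max_problem_size(2 * 1024**3))  # 16384

So 32-bit Linpack tops out somewhere around a 16K problem size no matter how much RAM the machine has, while 64-bit can use everything you give it.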
I use the "stop on error" switch a lot, and also the # of threads: when I'm testing the GFX card simultaneously to get an idea of the system's total heat production, for instance, I need to leave 1 core/thread free or LinX will grab them all :D