Hey N7 :)
Just tried this on my E8400 HTPC. WCed @ 500x8 1.24v 4x1G 1Ghz 5-5-5-15 and passed 10 tests with a max temp of 53c. Easy to use and works like a charm. Nice to see results without marathon Priming LOL
Thanks AgentGOD
Thanks AgentGod, this package is great!
Life is too short to run for 23 hours 45 minutes to find a failure when IntelBurnTest does it in just a few minutes.
I'm now able to run IntelBurnTest v1.6 for 40 passes at max stress without any errors; when I first moved to Vista 64 it was zero....
OMG, this thing is a real killer.
Had to raise my vcore by 0.025 to regain stability (I hope it's enough).
Under load I'm at 1.31v ~ 1.32v now...
3Gb + 64bit = welcome to hell ;)
It passed 2x 10 runs- good enough for me.
http://img149.imageshack.us/img149/1...packomghy1.jpg
Has anyone tried this with Skulltrail? I have tried it with my Skulltrail using 2 x QX9775s and I can't get it to run 5 tries without blue screening. Has this thing been tested on 8 cores or with registered DIMMs? I had my system very stable for over 3 months at 4GHz... but it won't pass even when I set everything to default. I can't imagine Intel would release something that is unstable at default values.
My hardware is Intel D5400XS with 2 x QX9775's
4 x 2GB PC6400 Mushkin FBDIMM
PS 1200 watt Thermaltake Tough Power
HSF Thermaltake Typhoon with AS 5 (both HSF and CPU's are lapped up to grit 2500)
OS Win XP64
Active cooling on memory and MCH
By the way guys, the official word on compatibility is: Intel(R) LINPACK binaries will NOT run on any processor other than an Intel one.
This is unfortunate, because I'm a great AMD fan, and all of my other PCs are AMD.
Temps are excellent, with a maximum core of 37C in Core Temp. I had already spent a lot of time keeping my temperatures under control since I planned to overclock this system. I can confirm that heat is no problem, also because the HS feels relatively cool to the touch even under load. Has anyone tested this software with an 8-core system and FBDIMMs? I tried upping the voltage from the default 1.8V to 2.0V, but that didn't seem to help. I have 4 sticks of Mushkin DDR2 800MHz which are supposed to be rock stable. I can run this machine with all other software and benchmark programs at 4.2GHz with no problem, so I am surprised that it cannot run this benchmark at the default 3.2GHz.
You meant memory usage not being 100%, right?
We need a SMALL headroom here. See, here's what I mean... if we truly allocate 100% of free memory, your system will automatically start to resort to the page file, which we DON'T want. I left some headroom in the equation (since v1.1 beta) to stop the paging from happening.
I have the same problem OC'd or running stock speeds... this has been from v1.5x through the current 1.6, with this blue screen error message (which I've never seen before except with this):
*** Hardware Malfunction
Call your hardware vendor for support
*** The system has halted***
With the earlier versions, where one could select the amount of RAM to test, I could get a 5-run pass if I selected a "small" amount of memory.
Intel D5400XS with 2 x QX9775's
4 x 2GB PC6400 Mushkin FBDIMM
PCP&C 850 (stand by for grief :p: )
Win XP Pro x64
Cooling is a little 6000btu a/c that I pipe directly into the case.
Rob O.
I haven't BSODed running it, but with not enough vcore I have rebooted in the middle of it; went up a hair or two and then it ran fine. I always run max also, which uses about 3.75GB of RAM. I have failed with certain dividers and frequencies previously thought stable. This proggy finds the weakness immediately.
Thanks for the reply. Yes, I get the same error message. Actually, on another Skulltrail motherboard I had these CPUs running 24/7 at 4.8GHz using a custom vapor system. But I have since decided I want to use this system as my everyday computer, hence the switchover to air cooling. I would really like some sort of utility that gives me solid feedback on stability, and this seemed like a great one since it works quickly relative to other solutions. Is there an earlier version that would work for 8-core systems using FBDIMMs? I have a feeling that it's the FBDIMMs, with their extra error checking cycles, that are causing the problem.
Hmm, try EVEREST and check your temps. I don't think air cooling can handle 4GHz quadcores at full Linpack load. I have high-end watercooling and still hit 72°C at max load on ONE QX9650 @ 4GHz (1.31 vcore under max load, water temp = 35°C).
So I HIGHLY doubt any air cooler can handle that kind of heat (and you even have two CPUs, which makes ambient temps even higher).
This Linpack test is really hardcore, so better get watercooling, or chilled watercooling.
Well then, IDK about your watercooling.
At 1.3v at 3.9GHz (on a Q9450, which would probably yield 4+ on a comparably binned QX) I get absolute maxes in the low 80s on air for Linpack, which, for an hour-long stress test, isn't too bad IMO, seeing as it barely pokes into the 50s for normal use.
I'd say with good airflow and smart voltages those procs can definitely handle this even in a Skulltrail config. And I mean, this is just to declare a stable system, so high temps here really aren't too much of an issue as long as the residual norms are consistent, at least insofar as stability goes.
At least give it a try.
Max stress level crashes the software on me; 1/2 stress passes no problem. I tried raising voltages on CPU and RAM but no difference. At 1/2 stress with lowered volts I get errors, so why is it crashing at max load?
I ran it 10 times and my e8400 @ 4GHz passed @ 65C.
What a cooker!
@AgentGOD
many thanks for IntelBurnTest.
we love it. :D
Is it possible for IntelBurnTest to stop when it finds an error?
We run 50 or more tests with error detection, so when test 25 (for example) fails, we must wait until IntelBurnTest has finished all 50 tests.
I hope you understand what I mean.
Just tried this out (recommended to me by majestik). Wow, the name is really fitting: 50C in RealTemp using IntelBurnTest, whereas the Everest stability test doesn't go above 41C, and both give 100% CPU usage. I didn't seem to get any results, though; it just seemed like it had hung. How long does a test usually take?
Depends on how many runs you selected in the last option. Keep Everest's stability test window open to monitor CPU core temps on a graph; you will see it peak up and down for every finished test. The IntelBurnTest window won't actually show any updates until it finishes all the runs you had it do.
So can I really trust this program for stability?
I ran it 10 times and passed; does that mean I'm 100% really stable?
Great time saving program for overclockers, i like it.
Much better than all others in my opinion.:up:
I heard from a friend in the Striker II Extreme thread that he could pass this 10 times, but while Priming it would fail on one core in less than an hour.
Hmmmm
I also heard of a few instances where people passed IBT but failed Prime.
Is there any way to turn some kind of verbosity on? It's annoying not to know what IBT is doing and how many tests it has passed. When trying to find stable clocks, it would be nice to see a test fail, stop the program, and adjust vcore or something.
Otherwise it's a great program!! Even without knowing what the program is doing, it takes a lot of waiting off my shoulders.
EDIT: After I wrote this post, when booting back into Windows (my main OS is Gentoo; there's no net in Windows, it's purely for overclocking and gaming), there was an option to run without error detection. The option appeared in the program after I lowered my FSB enough, as it wasn't there before.
Running without error detection is a good way to monitor progress, but IMHO it should report what it is currently doing and how many tests have passed successfully with error detection on, too.
I also request this.
Also, when you run it with error checking, why does it hide the information which is displayed when you run it without error checking?
It would be nice to see how long it has been running with all the info displayed and to also halt running when an error is detected.
Error checking should not be a switch but should be enabled all the time.
Stop on error should be user selectable.
All depends on how long you run IBT.
I usually run 10 loops using approx 6.6GB of RAM. (just over an hour)
If it passes, then I run 40 passes.
If that passes, I let it run 150 passes overnight.
Of course, when I am happy with the speed I want and it passes 150 loops, then I do not need to run it again :clap:
Actually, at 4GHz the QX9775 is surprisingly cool and doesn't heat up that much. Compared to my old QX6700, which can reach up to 83 degrees, these processors are relatively easy to cool, and I know quite a few people with Skulltrail running at 4GHz with the same system. These same CPUs were used in a custom-built vapor system on another heavily insulated motherboard and can run 4.8GHz 24/7 at 100% load for days on end, and could boot into Windows at 5.5GHz. But that isn't the point; the point is that this stress test cannot run even at default speeds and voltages with cooling that can otherwise easily handle 4GHz. I have even underclocked from 3.2GHz to 2.8GHz and still it crashes.
Having said that, this software is fantastic for quadcores. I have been able to fine-tune several quadcores using it. In one case I was able to shave .5v off my old QX6700 and still be stable for 20 runs at an overclocked 3.2GHz. It allowed me to adjust my radiator fan to where it is barely audible while still maintaining 100% stability. I can now quickly tune a quadcore with high confidence of stability. I just wish it were compatible with Skulltrail systems.
Yeah, I'm burn-test stable for 20 passes but fail Prime within 18 min :shrug:.
I would be *very* interested to know if it is being used with success at Intel on the BOXD5400XS.
Exactly... you and me both... I'm sorry Spiro, I didn't log the specifics (i.e. IntelBurnTest version / mem amount tested / bios settings) when I did get a 5-run pass... Best I remember it was with a version where one had to enter the amount of mem to test with. And when I said "small" amount of memory, I'm guessing between 256MB and 512MB (probably using default bios settings too). I also had several runs where 2 or 3 iterations posted before blue screening (again using very little memory).
Hmm, ran the software and got 1 error out of 20. Ran it again without changing anything and got 100% :(
All errors are random/probability based. It is just like prime/orthos giving an error in 45 minutes on first run, then not until 2-3 hours or in 20 minutes on the next with same settings.
I think 5 to 10 runs are great for quick adjustments, which is what I will use this program for primarily. And 20 runs (assuming setting 1) may prove you are within a notch or two of stability. From there, some will game/fold/etc as final test...others prime/orthos overnight...others 50+runs of this test.
In my post 206 in this thread, I tested at 1 notch of vcore below stability, where Orthos 10ffts and Prime small FFTs had failed at 8 and 10 hrs. IntelBurnTest failed one 30-run at max and passed one. It made 2 errors in the first 60-run, and 1 error in the second 60-run but not until line 49, so it may well pass a 60-run if run enough times. The odds are it is unlikely to pass two 60-runs (3 hours total) in a row, and the odds are it would fail the first run. But it is all probability of error over time.
This is a new program...and would be nice to learn what it's capabilities are for rigs other than mine. Hence it would be nice if you guys could post comparison pics so we can see settings you use and specs of your rig....like...
http://www.xtremesystems.org/forums/...&postcount=376
Otherwise what I learned from your post is :shrug:
Just wanted to give a :up: for this little proggy! My chip managed to run 6 rounds correctly; on the seventh it BSODed from overheating, running at 88°C. :shocked:
Well done! :up:
I just tested:
E8400 C0 (570x7 @ 1.28750V)
Ballistix PC2-6400 2x1GB (1140 CL5 @ 2.21V, real probe)
P5Q Deluxe BIOS v.1306
- Stress: Lv. 1
- Times to run: 10
- that will really kill my CPU and RAM with extreme temperatures (CPU core 77'C / RAM 57'C with the OCZ XTC RAM fan)
Result: 100% pass (I tested 3 times and all got 100%).
Does that mean it's fully stable?
I don't know how to test for full stability >_<"
-------------------------------------------------------
I will post images later; right now I'm running Orthos Blend.
Thank you
Can anyone else confirm this warning for me? I have a Q6600 G0 stepping that I'm OC'ing to 8x425. Temps easily get in the 90's (°C) running IBT. I worry that it might be my OCZ Vendetta 2 air cooler -- I've had trouble mounting it in the past, might have to either lap it or invest in a bolt-thru-kit, or both. Read from someone else here that temps should not reach within 10°C of TJ Max, so if my TJ Max is 100°C then is 90°C okay? I certainly don't want to kill my chip. Maybe my OC is too aggressive?
I'm not done pushing things yet, but this is in my Shuttle SP35P2... 10 passes giving IntelBurnTest access to just short of 3GB of memory. Nice and quick test that I'm gonna be promoting every chance I get.
http://img98.imageshack.us/img98/579...9ghzys0.th.jpg
1.12v under load and 1.168v idle ain't too bad I figure, curious to see how far I can go with full stability with these higher temps.
@AgentGOD: can 32-bit apps handle the /3GB switch in a 32-bit OS? Is it possible to do that by setting the flag IMAGE_FILE_LARGE_ADDRESS_AWARE = 0020h?
Why can't the 32-bit app determine the maximum available RAM to test (when selecting "1.") if there is more physical RAM available than virtual, i.e. more than the app can handle?
Use GlobalMemoryStatus, then the minimum of MEMORYSTATUS.dwAvailPhys and MEMORYSTATUS.dwAvailVirtual, before any VirtualAlloc, to get the maximum memory actually available to the app (virtual + physical at the same time if there is lots of RAM). I suspect you use only dwAvailPhys, not the minimum of the two.
The memory limitation for 32-bit was actually introduced by Intel themselves into the 32-bit Linpack binary, and not me. So adding that switch won't do anything.
OK, I saw it in both binaries' headers: Characteristics = 0x10F. There's no harm in fixing both headers to 0x12F and fixing the Checksum in the PE header afterwards.
So what about the memory allocation trouble? Is allocation done by Linpack, or do you pass it an already allocated buffer? If the latter, what about the rest of my post above? (It's not related to the /3GB switch; it's because only available physical memory is taken into account, not virtual, which can be smaller.)
Linpack is compiled with all optimizations and whatnot by Intel and supplied as a binary for single processors. I don't think there is any way any of us could ever optimize it that well even if we had the source. The 32-bit version isn't very optimized, as it is a broader build for all x86 chips. The 64-bit binary is highly optimized for EM64T processors with SSE4.0 and above. That is the main difference between the two. 45nm chips will work a little harder than their 65nm counterparts in this test due to having SSE4.1.
Even if you changed the binary with a hex editor to set the large-address-aware flag, the binary was never compiled for more than 2GB of addressing. This may or may not be a problem, but would introducing possible buffer overflows for the sake of using another 1GB of memory be a wise thing to do in such a precision application? Without knowing whether Intel's source used signed or unsigned long long ints, you may or may not introduce problems. If all long long pointers are unsigned you should be fine. But I wouldn't use unsigned long long ints if I only needed to address and work with a value that a signed long long could contain, because sometimes the flexibility of negative values is necessary.
@mikeyakame: one can only hope there are good coders at Intel :) who use x86/x64-independent, sign-appropriate types like *_PTR, size_t, TCHAR, etc., not just "int" for everything. Since there are lots of useful defines, it's quite easy to declare what you want simply by defining PU[type]_PTR rather than multi-word definitions; I hope Intel's programmers do likewise.
That is true unless the source base is coded as universal for both *nix and Windows. There aren't too many differences between compiling for *nix or Windows in my experience, and they can usually be managed with Makefiles and headers alone. It would require a lot of #if preprocessing statements throughout the headers to use both Microsoft typedefs and *nix typedefs. It makes more sense to keep things simple and stick to type definitions defined by the ANSI C standard rather than OS-specific ones. That is just my personal opinion anyway... without seeing the actual source base, all I have are conclusions based on experience/simplicity, hardly credible nonetheless!
Edit:
Never mind the *nix bit I thought I'd read that it was also for Linux in the Intel docs. My bad.
Linpack is written in C though, not C++, so the question is: do all those Microsoft typedefs apply to C as well, or are they only applicable under C++? I generally write code in C for *nix, and C++ for Windows. I've never really written anything in C to work on Windows, so I am at a disadvantage there it seems.
They are C-compliant because they are based on define & typedef macros and basic types, plus "bool", since later C compilers recognize the bool type. long long should also be supported as __int64, and the other larger __types for SSEx must also be supported by the compiler.
Take a look at basetsd.h and ntdef.h from MS for short & useful definitions of common types.
[edit]
It's just that the MS Windows kernel is all plain C. And they provide headers for C because all Windows drivers are written in C.
Thanks for that info! I should probably look into it more when I have the time. I've never bothered using C on Windows because I've never had the time to sit down and figure out what is applicable and what isn't. I suppose as long as the library functions one is accessing don't require object-oriented functionality, there is no real difference between C and C++ besides the code to write it, save for the few differences in internal functions that exist between different C libraries.
Fantastic program :up: , really beats the :banana::banana::banana::banana: out of my Q9450. Quick question, sometimes, Prime95 fails when IBT passed me (10x pass). Which should I believe?
10 passes is not enough.
It is like saying that Prime passed 4 hours (but fails at 4Hr 10min)
When I had 2 x 2GB ram, I was running 10 passes until it was stable at the CPU speed I wanted. Then I would run for 20 passes adjusting VCore again. Final test was 400 passes with NO errors.
Using 4 x 2GB instead of 2 x 2GB Ram made a big difference.
Running 2 x 2GB I could set the FSB on my Asus P5E WS-Pro (Q6600 G0) to 365 and run 400 passes with no errors and no increase in NB volts.
Putting those extra 2 x 2GB Ram, I had to raise the NB to 1.55 and it still got errors within 60 passes.
With a FSB of 346 and NB of 1.31, it can pass 80 passes (10 hrs) with no errors.
This also meant the VCore could be lowered and thus lower temps.
(Vista Ultimate 64bit)
You could go on testing indefinitely by that logic... nothing is ever ENOUGH per se, but this is closer.
Lame. I ran this using almost 8GB of RAM, I think 25 times; passed. I've also run Prime95 in all modes while playing games etc. for 36+ hrs.
Comp still freakin' crashes! Never buying another MSI board. Anyone else having compatibility issues with an HD4850 + P45 + maybe 8GB of RAM?
What is stable? 10, 100, 1000 passes, retesting each week?
Stable to me is that my PC does everything I want it to, when I want it to... it can pass Linpack yet fail on Prime, or a 3DMark... or crash in a game...
I've said this numerous times before: only a mix of programs makes a better tester, and daily usage is the best test. There's no definitive bulletproof testing program... but with this program you don't have to wait for hours to see if something is wrong...
And guys, please stop doing 100-plus loops; what a waste of cycles and totally ridiculous hardware punishment...
Hi,
I am using a Q9300 processor at 3.2GHz and was wondering why I am only getting roughly 21 GFlops of floating-point performance, when I should have roughly twice that. It looks like the program was only using two CPUs instead of all four, despite detecting four threads and four CPU cores.
I am running the 64 bit version of Intelburntest, on Windows Server 2003 x64.
Any ideas?
This is weird. I have an ASUS Striker II Formula, and when I test my E8500 with Super Talent (T800UX2GC4) DDR2 800MHz memory, it tests stable one time, then the next it doesn't! I changed to Transcend DDR2 800 and it's perfect every time, but the ST RAM has been checked and is not defective! I'm running both in dual channel mode, yet the ST will be perfect every time in single channel mode! Any ideas why?
BTW, thanks AgentGOD, excellent stability tester!
E8500@ 3.82 Ghz,Asus Striker ll Formula 0902,2 x WD 320 GB,EVGA 9800 GTX,2 GB ST RAM,SB X-FI EXTREME Music SC,PC P&C 750 Quad P/S,2 LG DVDRW’s,Zalman9700 CNPS Cooler,Antec 1200
Increase the board's chipset voltage.
Thanks Stealth, I was wondering if maybe I needed a little more! Appreciate your prompt reply!
IMO 10 should be done to see if you are far away from stability.
If passed, do 20, then 40, then 60 to see if you are getting there.
Finally 100 to make sure...
Hello,
I am pasting my test results here, as additional data. Does anybody have a clue why I am only getting a performance of roughly 23 GFlops on a Q9300 quadcore, while a Q9550 user is getting roughly 45 GFlops? It's not like two of my CPUs were idle or so, all four are under full load.
Could it possibly be that the Q9x50 processors are coming with 12 MB L2 cache instead of my Q9300's 6 MB?
Enter the number of times to run the test (5 or more recommended)> 10
----------------------------------------------------
Executing Intel(R) Linpack 64-bit mode...
----------------------------------------------------
Intel(R) LINPACK data
Current date/time: Tue Sep 16 23:16:05 2008
CPU frequency: 3.450 GHz
Number of CPUs: 4
Number of threads: 4
Parameters are set to:
Number of tests : 1
Number of equations to solve (problem size) : 18534
Leading dimension of array : 18534
Number of trials to run : 10
Data alignment value (in Kbytes) : 4
Maximum memory requested that can be used = 2748448024, at the size = 18534
============= Timing linear equation system solver =================
Size LDA Align. Time(s) GFlops Residual Residual(norm)
18534 18534 4 190.305 22.3067 3.386492e-010 3.495055e-002
18534 18534 4 183.210 23.1706 3.386492e-010 3.495055e-002
18534 18534 4 183.008 23.1962 3.386492e-010 3.495055e-002
18534 18534 4 199.026 21.3293 3.386492e-010 3.495055e-002
The GFLOPs rating by Intel has to do with how much RAM was tested, as well as the time it took to run the tests.
Just in-case you guys are curious, here is my system's current status:
http://i34.tinypic.com/2zgcl08.jpg
I wanted to lower the vcore two notches (from previously known IntelBurnTest stable configuration), and it resulted in a BSOD after the first iteration (MACHINE_CHECK_EXCEPTION).
Even after lowering the vcore two notches, it was still hot. I turned off ThermalThrottling (CPU TM) in the BIOS for the most accurate testing.
Guess this Xigmatek is getting dusty.
P.S.: I only look at the maximum CPU temperature, not the average. I'm using RealTemp v2.77, and it uses a 100°C Tjunction value.
Picture didn't show up for some reason:
http://i34.tinypic.com/2zgcl08.jpg
Hi AgentGOD,
thanks for replying. I am rather happy to see that there is something like IntelBurnTest. It dawned on me in the last 24h that the GFlops cannot depend only on raw CPU power....
I am currently facing some issues here since I changed to an ASUS P5Q3 deluxe mainboard.
My current settings for CPU PLL voltage, GPU Ref 0/2, 1/3, NB voltage, and some others are so razor's-edge that in some cases only one step up or down will make my system Prime-unstable. IntelBurnTest, however, is a lot more tolerant and allows a much wider range of settings without showing calculation errors.
What's the common opinion here? Can one dismiss Prime/Orthos/OCCT's errors? Priming has always been a neccessary evil to me and I would like to have it replaced by a better solution.
I saw Leghoof'd's opinion here: what is stable for you is stable enough. True. But I have at times seen effects on my operating systems that led me to believe that errors in the background that no one notices are slowly affecting the reliability of one's OS overall, causing problems in the long run that others would easily blame on Microsoft's incompetence. <g>
There's also nothing saying that random Prime errors aren't just that... and not very dependable. I'm still inclined to stick with IBT for a good while, but I will still be running other stress-testing apps alongside IBT for stability verification, for all of those naysayers out there. :)
Cornelious, Singh:
It would be a bit too simple to say one or the other application causes more or less stress when it comes to P95 and IBT.
IBT, for one, uses the whole memory and constantly shoves data through the northbridge. The NB is certainly the main point in Intel systems and is involved in everything crucial. Let's take for granted that IBT is putting extremely high stress on the system, and let's also take for granted that P95 makes one's NB hotter. Sure, it should still pass, but is P95 still appropriate for modern chipsets? Does Prime stability mean adjusting to the needs of an outdated piece of software? How well does Prime95 actually work when dealing with the completely different ways of task management in Vista and Windows XP, which came many years after Win95/Win98? And what about 64-bit systems?
Frankly, I have no clue. I just mean to sum up points to be considered. IBT uses original Intel code, and Intel knows what they are doing about their own CPUs and chipsets.
Then again, just as Singh said, there is that P95 instability in the back of my head. I've been building systems for 17 years...mostly for a hobby, but still. It's such an old habit!
Raz,
such a harsh effect looks extremely unusual. 25MHz is nothing for a 45nm quadcore. I've had a Maximus Formula board and now the P5Q3, so I can roughly say about your P5E3 that this should not happen. By now it looks to me like this is a general reaction of Intel systems that run DDR3 RAM. My P45 is a die shrink compared to your X38, yet it seems so familiar.
This really looks more like an issue of Prime 95, frankly.
Guys, I do this every day for a living, but this has me stumped! I am running Super Talent T800UX2GC4 800MHz overclockers' RAM in my unit, and even at stock speeds IBT passes, then fails! If I run it in single channel mode it's fine (OCed or stock), yet I put in 2GB of Transcend JetRam in dual channel mode and it passes every time, stock or OCed at 3.8GHz! The ST RAM I've checked with memtest and every other test I have, and it's fine! I ran Orthos with 3DMark06 looping 75 times on each benchmark continuously, and after 10 hrs of Orthos the Super Talent is still running!! I don't get it... why is such good RAM failing in dual channel mode and passing in single channel mode, while any other RAM, including my OCZ DDR2 800MHz, passes? I tried what Stealth suggested and upped my chipset voltage, no diff, and this is only happening with the ST RAM! Don't get me wrong, no crashes, no BSODs or any problems with the Super Talent RAM, just in IBT!! I thought I'd give you guys a smack at it, as my head's tired, lol!!
BTW, if I set my RAM usage from option 1 to just 20MB below what IBT says is max RAM usage, it passes every time! And all temps are very low as well, except the CPU, which in IBT goes to 58 degrees! Thanks to all in advance for any suggestions!
My Specs:
E8500@ 3.82 Ghz,Asus Striker ll Formula 0902,2 x WD 320 GB,EVGA 9800 GTX,2 GB ST RAM,SB X-FI EXTREME Music SC,PC P&C 750 Quad P/S,2 LG DVDRW’s,Zalman9700 CNPS Cooler,Antec 1200 Gaming Case
Prime95 has been used for years to test stability. It's widely trusted. A stable PC runs Prime95 for any length of time you want without reporting an error; if it failed, your PC isn't stable, period.
I agree that if you use your PC lightly you may never see a BSOD or crash, but your ALUs/FPU still can't be trusted.
Can you ignore a Prime error if IBT or whatever program you use runs just fine? Well, yes, it's your PC, but if you want a trusted PC you can't.
Prime95 stresses the PC in a different way than IBT, so it's reasonable that one can show errors where the other can't.
There's no one test to rule them all... the more different tests you throw at your PC, the better the chances of uncovering any instability.
There's no way to prove that a PC is stable, but the more tests it passes, the higher the probability that your PC really is stable.
To me it makes no sense to ignore Prime95 when it reports an error and trust only IBT because its run was OK. An error is an error; if lowering the clock makes the test pass, that is proof that the program is OK but your PC isn't.
I have 4GB (2x2GB) of DDR3 RAM, and when run under 32-bit Vista it crashes as soon as you start the test, saying "This application has to close", but under x64 it works flawlessly. The system is stable (40+ IBT passes under x64 Vista, 10+ hours of various Prime95 tests and 20+ hours of Memtest86).
Any idea why it's crashing in 32-bit? Is it because it has more memory than the OS can see?
[edit] nvm, found the answer a few pages in: you can't do the max stress test under 32-bit, it uses more memory than it has available.
Holy $#%#$
That was something, seeing my physical memory skyrocket like that in Task Manager and then all 4 cores cap out at 100%.
Previously I ran SETI@home on 4 cores and I would get around 65°C. When I ran this, within seconds I was over 70° and stopped it so it didn't trigger my auto shutoff (yes, I know; my new heatsink should be here tomorrow!)
Thanks guys, I found my problem! I was suspicious that one of my two sticks might have a bad sector or two! Nothing that two new sticks didn't fix! IBT ran 20, then 30, then I ran 50; rock solid now! Funny though, Prime and Orthos didn't pick it up! Oh well, perhaps not enough to show up in 9.5 hours! Thanks again AgentGOD for the program, very useful tool in my books!! :up::D
Let me just say one thing: the very nature of this thread does not allow me to explain all the circumstances. My 17 years of experience just tell me that something has to be figured out with the P45/X48/X38 chipsets and DDR3 memory, and it has nothing to do with my CPU clock. I was using the CPU on another board before I bought this one.
There's no need to state the seemingly obvious, but it doesn't simply WORK for the issues some are having here.
One little part of it is that the P5Q3 is plainly defaulting to DDR2 timings while using DDR3 RAM. How else does one explain 5-5-5-15-5 appearing in the BIOS when turning the memory timings from AUTO to MANUAL?
Considering all this, your post has been pretty useless....:shakes:
Hello,
I seem to have solved the basic stability issues for my system now.
So far, any moving away from a NB voltage of 1.32V meant an increasing frequency of prime errors.
Last night my PC was priming for 7 h 20 min. stable with no errors at a NB volt. of 1.28V, using the 24.14 Prime for 64 bit Windows. the only thing I changed was....
PCI-E Frequency from 100 Mhz to 101.
I have heard before that this supposedly solved stability issues as far back as the ASUS P5B series, but it never did for me. Anyway, for those who are suffering Prime issues, enable this for a test and let us know what it does for you. I know it doesn't sound very plausible, but it is entirely possible that this is changing a strap within your Intel chipset that makes it all stable.
This most likely applies to all Intel chipsets and ASUS boards since the P965 chipset. For example, P965, 975X, P31/33/35, X38, X48, P45, etc.
System:
Intel Q9550 [3.4ghz (400x8.5) @ 1.264v]
DFI X48-T2R
8GB [4x2GB] G.Skill DDR2 [960 @ 2.1v]
Thermaltake Toughpower 850W
EVGA GTX 280 SSC
I was able to prime95 for 8hrs. Using IntelBurnTest, if I set the stress level to low (1/4 of memory) - it passed (50 passed). If I set the stress level to anything higher than 1/2 of memory, it failed (1 passed/4 failed). I set everything back to defaults. If I set the stress level to 1/2 of memory, it passed (5 passed). But if I set the stress level to Maximum stress level, it failed (1 passed/4 failed).
What does this mean?
Or you can decide between 8h of Prime95 and IntelBurnTest's 1/4-stress result (at least it didn't end in a BSOD!!)
No, there are other reasons too.
Possible memory incompatibility with your mainboard.
Memory not qualified to run with four sticks on your board.
Insufficient Northbridge voltage.
Mainboard using stupid defaults.
A defective peripheral device on the board, or card in the system, being broken and throwing your northbridge bus system off.
Your power supply being insufficient or dying (Dead elcaps!)
And possibly more issues.
yes... at least I've got that working for me :)
The BIOS release I'm on adds support for my memory (g.skill ddr2-1000). As for running it with four sticks -> :confused:
I'll up the northbridge voltage and see what happens. PSU should be more than sufficient - I have a Thermaltake ToughPower 850W.
Hmmm... the temps on my system do rise to about the 70s, but I do not experience any blue screens, random reboots, or system hangs. It finishes what it does and tells me that it failed 80% of the passes.
Some testing here with Allendale C2D @3GHz using option 1 max stress and 2000x iterations;
375x8, MEM Linked mode 3:2 divider @1000MHz DDR, NB voltage @1.56v
@1.337v passes error free Prime95 24/7 non stop all tests (Small/Large FFT/Blend mode)
@1.337v error in IBT within 3 hours
@1.343v error in IBT within 4 hours
@1.350v error in IBT within 7 hours
MEM changed to Unlinked mode 1000MHz DDR
@1.350v error in IBT within 11 hours
@1.356v error in IBT within 16 hours
@1.362v FUBAR
@1.368v FUBAR
As you can see above, IBT stresses the system so much I had to keep increasing the vcore.
Note that this also highlighted the board's NB weakness when using Linked mode with the 3:2 divider; changing to Unlinked mode 1000MHz improved things, although the vcore still needed a little bump.
:up:
Edit:
Darnit, something isn't holding up. I think it must be either the CPU's hard FSB wall of 375MHz failing, or the NB failing with that FSB/MEM divider; gonna try using a higher multiplier and reducing the FSB.
333x9, MEM unlinked mode 1000MHz, NB voltage 1.56v
@1.356v Test in progress
update if anyone cares:
So I was able to pass 5 tests - the only change I had to make was for the memory to use a 1:1 divider. I was using a 5:6 divider (400fsb - 480 memory). My memory is rated for DDR2-1000, so I don't know what's going on there. But if I set my memory to DDR2-800, I can claim stability from the IntelBurnTest. I had the memory voltage at 2.1v, but reading up about the motherboard, it said to put the voltage at 2.15v to get 2.1v (real). So I'll play around with the memory voltage when I get home.
Question about memory dividers on DFI board for those who know: What's the difference between setting the memory divider to 333:666 vs 400:800?
Thanks!