How many people with "degraded" E8x's are running Micron D9 based ram?
Yes, after supplying 1.300v - 1.349v to the vcore
Yes, after supplying 1.350v - 1.399v to the vcore
Yes, after supplying 1.400v - 1.449v to the vcore
Yes, after supplying 1.450v - 1.499v to the vcore
Yes, after supplying 1.500v - 1.599v to the vcore
Yes, after supplying 1.600v or more to the vcore
No, and I run my vcore at 1.300v - 1.349v 24/7
No, and I run my vcore at 1.350v - 1.399v 24/7
No, and I run my vcore at 1.400v - 1.449v 24/7
No, and I run my vcore at 1.450v or more 24/7
All along the watchtower the watchmen watch the eternal return.
Me, but other RAM makes no difference for me. Could be too late, though.
Why would RAM play a role in CPU degradation?
Intel Inside
Yep, I totally back you on this. Actually, one of the most torturous things I do is video encoding/transcoding; Prime95 torture doesn't even touch that kind of torture. Current video codecs will use everything the CPU has to offer: integer, floating point, MMX, SSE, etc. Whereas Prime95 appears to mainly use the floating-point unit.
We need a torture test based on video encoding: it would contain a known video sequence of some length, with a checksum generated from a known-good encode. That way we can run the encode job, generate a checksum, and compare it with the known-good checksum. This will tell us whether there were errors or not. Maybe I will work on this.
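The checksum idea above could be sketched roughly like this (a hypothetical Python illustration; a fixed deterministic CPU-heavy loop stands in for the real video encode, and `KNOWN_GOOD` would be the digest recorded on a verified-stable machine):

```python
import hashlib

# Digest recorded once on a known-stable system (placeholder here).
KNOWN_GOOD = None

def stress_pass(rounds: int = 50_000) -> str:
    """Deterministic integer + floating-point workload.

    Every intermediate result is folded into a SHA-256 digest, so any
    computation error anywhere in the run changes the final digest.
    """
    h = hashlib.sha256()
    x = 1.0
    for i in range(1, rounds):
        x = (x * 1.0000001 + i) % 97.0                 # floating-point work
        h.update(int(x * 1e6).to_bytes(8, "little"))   # fold FP result in
        h.update((i * i % 65521).to_bytes(4, "little"))  # integer work
    return h.hexdigest()

digest = stress_pass()
if KNOWN_GOOD is not None and digest != KNOWN_GOOD:
    print("Possible CPU error: digest mismatch")
```

A real version would replace `stress_pass` with an actual deterministic encode of the reference clip; the comparison logic stays the same.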
Sorry got a little off topic there.
Sandy Bridge 2500k @ 4.5ghz 1.28v | MSI p67a-gd65 B3 Mobo | Samsung ddr3 8gb |
Swiftech apogee drive II | Coolgate 120| GTX660ti w/heat killer gpu x| Seasonic x650 PSU
QX9650 @ 4ghz | P5K-E/WIFI-AP Mobo | Hyperx ddr2 1066 4gb | EVGA GTX560ti 448 core FTW @ 900mhz | OCZ 700w Modular PSU |
DD MC-TDX CPU block | DD Maze5 GPU block | Black Ice Xtreme II 240 Rad | Laing D5 Pump
Yup, silicon's limit is where it stops being semiconductive and starts going non-conductive; for transistors and similar electrical micro-parts that's around 95 degrees, and the newer ones are rated to 105? So IMO 60 is nothing, but it's still better to run at 45 than at 60.
As for the gaming / app question: yes, I play games in the foreground and work with Photoshop / 3ds Studio in the background, with affinity set to specific cores. IMO that's more of a stress test than a simulated number stream, and that's where core count comes in!
Sony PS3 | Nintendo Wii + Nintendo Wii Fit
By Mercedes - Adventure Trips around Middle Europe in a Youngtimer | https://www.facebook.com/S.Mercedesem - Like Us, if you Like us that is
I had the same thought, and did a lot of testing with both an older G.Skill kit and brand-new out-of-the-box Team Group D9GMH. The results were identical with both kits when testing from bottom to top, including stock tests before and after overclocking. Neither kit has degraded. Hope that saves someone some time; I did maybe 60 runs and documented everything to be sure.
Last edited by mrcape; 03-12-2008 at 06:40 AM.
I just decided to do my weekly check for further degradation: a quick 10K FFT run in Orthos.
I found the weirdest thing. Not sure what it says if anything at all, but I will test more later.
In a nutshell, the box I'm on has two Raptors, not in RAID, but both running XP Pro SP2 and a bunch of apps, and both able to run tests. They're both pretty fresh installs, within two weeks.
So here's the weird part. I go to test 10K FFTs while I take a nap, using the same saved settings that passed last week (8.5x500, 1.325 vcore). The test bombs instantly, and when I try to run Blend it bombs instantly too. So I go into the BIOS and confirm everything is the same as what I have on paper from last week's test. Bummer, it is, and I think it's slid further down the hill.

So for the hell of it, I decide to boot into the other Raptor and run the same tests, with the same settings of course. What do you know, 10K passes no problem, just like last week. That tells me there's some software or OS issue going on with the first Raptor. Like I said earlier, the OS installs are fresh, so I kind of doubt it's that. Also, the one it passed on is my "play" drive, with music, games, etc.; the other is my work drive, which I keep pretty clean. It's backwards from what I'd expect.
Apps running were -Smart Guardian, Core Temp and Orthos.
I still need to do more research into this and see just what services are running in each install, but I think they're close to identical.
Anyway, we'll see. I've thrashed an OS many times in the past and had to reinstall to get accurate results, but it doesn't seem like that this time. I'm going to see what I can find under these conditions first, and then reinstall the OS to see if that's what happened.
Last edited by mrcape; 03-12-2008 at 07:36 AM.
I think the idea of this thread is good, but I have seen no proof that degradation is actually the cause. Step one in all research is proving that a problem exists, not just assuming it is there. You could be looking for a solution to something that might not even be a problem in the first place (the thread title says it all, really).
It's not easy: you have to run a number of systems, preferably with different specifications and with proven stability of motherboard, RAM, and PSU, at stock, plus a separate bunch overclocked. Even then, you have to consider that no one has actually been able to produce good data on degradation in the past.
You guys are just trying stuff at random here.
You are right! The sole reason I created this thread is that I'm not convinced degradation is actually occurring (at least not for my CPUs). I am not seeing positive proof yet. However, what I am seeing is my system/CPU going in and out of stability, and I am (and have been) seeing a lot of others saying "My CPU is degrading because it was Prime stable @ 1.35v, and now it takes 1.36v to be stable." But I'm also seeing the same people saying "Huh, WTF? This week my CPU is back to Prime stable @ 1.35v." And don't get me wrong, I'm not just talking about the people posting on XS; I am seeing these posts all over the internet.
Last edited by CrazyNutz; 03-12-2008 at 08:59 AM.
No, that's not it at all in my case. Where did you get that idea from?
I have tested this chip hundreds of times and I know that it's degraded. The change here is that software/OS is affecting the stress test on one drive and not the other. All that says is that it hasn't degraded MORE.
Last edited by mrcape; 03-12-2008 at 11:51 AM.
Degradation of these chips is for real, and I'll take measures to prevent it on any 24/7 system. I just want to hear the detail of the experiences and ultimately find a method to calculate the edge of where damage will occur. It might not be possible, but sharing our experiences does build a set of data, and it doesn't matter to me at all who thinks what of the data.
In any debate, there's always going to be a point where people will say that the data is no good in order to back up THEIR OWN assumptions. It can go back and forth until they will say you need to test every chip in the United States, in their back yard with CNN filming.
And even worse in this case, all the other side has for a basis is hope! Hope that this chip they want to love is not damaged, hope that there's some special trick answer to the problem. It's not a bad dream, people; it's really happening to the chips. Hope without answers is worthless.
I want to hear from people who actually have the chips, have experienced performance loss, and can share the details. Then I'll make my own data set based on their experiences. Like I said earlier, it's the best we can do without a pile of chips and endless time. Maybe it's selfish, but WTF else is there?
Again, you can't go backwards to test for this to get "proof", and no one's going to waste time and fresh chips doing this!
Last edited by mrcape; 03-12-2008 at 12:22 PM.
I don't stress test for hours on end, but I like to push it to the max; in other words, I'm not afraid to give it some go-go juice.
Anyways I am watching the thread and reading and re-reading and maybe will try and pick up on something.
My hunch is that the BIOSes are not written well for 45nm support.
It's early days yet, but I am recording everything I do when I OC, and perhaps in a month or so I will have some concrete proof.
I don't buy what you are saying about the "software/OS affecting the stress test".
This is a quote from the author of OCCT:
bbz_Ghost: Stability testing, CPU-wise, can be done under any OS. Let me explain: in a stability program, we address the CPU directly, through assembly code. We control almost everything. I'm developing under Windows just to get the benefit of a few Windows procedures, be able to produce a nice GUI, etc. The only thing that can break my program is a problem with Windows's task scheduler (and that's really unlikely). Even if it runs under Windows, if OCCT reports an error, it *does* mean the assembly code itself detected an error. So even under Windows, you get pretty good results.
I find the "you want your computer to be stable under windows" remark really true too
And it's much more convenient as well
This is a little out of context so you should read the whole thing here:
http://www.hardforum.com/showthread.php?t=1145172
I too am a programmer, and I know that unless you are making Windows API calls and such, Windows is very unlikely to have an impact on the stability of your application (unless some driver or Windows bug brings the whole system crashing down). Stress-testing apps should have their own discrete stress-testing routines to run, and not rely on the OS APIs.
I said I need to research it and see if that's the case: that a damaged OS may be the problem with that drive, though I doubt there's much difference from drive to drive.
I too am a programmer and I know exactly why you would have your doubts.
Let's try something constructive; time permitting, I'll test and report back. Knowing my setup, and assuming I can duplicate what happened earlier (passing on one drive and not the other), what test method could prove to you that there's no difference from drive to drive? I can test the warm-up theory as well if you'd like.
Let's work on it.
I dunno, doesn't dying RAM lead to BSODs regardless of the OC, then? I mean, bad RAM technically shouldn't even run well at stock, since, well, it's bad RAM?
Knock on wood for me, but what are the typical symptoms of dying D9? I mean, the end result is obviously no POST, but I don't see how increasing vcore to maintain an OC has any correlation with dying RAM.
How does more vcore on the CPU suddenly help bad RAM run better?
>> i5 750 @ 3.6Ghz | CM212Plus + P12 | P55-UD3R [BIOS F2] | 4GB G.Skill CL8 | Zotac GTX 580
.: 4 x 1TB WD | Corsair TX750 | Lian Li PC-A70A | X-Fi | Logitech Z-2300
Not necessarily. Bad RAM sometimes just creates instability at random.
Last night after my last post Prime95 ran no problem for > 5 minutes.
This morning with the EXACT same settings it will not run!
Not even for a second.
This is with a very mild FSB of 371 and default voltage settings on the Proc.
I installed no software and did not even enter the bios.
All settings are exactly as they were last night.
EDIT:
Now ten minutes after posting this Prime is running no problem.
So what is it lads?
Degradation? That just seems too far-fetched to me.
Possible heat issue? Seems more likely at this point.
Software issue? Top of my list.
Last edited by TheThreeDegrees; 03-13-2008 at 02:30 AM.
I too do video encoding, but I don't see that much more stress on the CPU. I'm doing DivX6 with experimental SSE4.1 Full Search and multithreading enabled, and x264, again with multithreading enabled.
Still, Prime95 loads the CPU harder when it comes to heat; Small FFTs just don't rely on the bus or DRAM, and I guess that's why.
However, stressing the additional SSE extensions makes sense to me. A stress test that exercises the ALU, FPU, and all SSE extensions, but that STILL runs within the CPU only (no DRAM interaction), would be nice as a CPU-only stability test.
Yes, I think you should try to duplicate what happened. If it is stable on one drive and not the other after repeated testing, then I would agree with the damaged-OS theory. If it does prove stable on one and not the other, it could also be due to bad chipset drivers on the failing drive (they could be setting up the chipset registers poorly, or not at all).
BTW: I have some new info on my findings, and I'll post it later after I do a little more testing.
UPDATE: There is a setting in your DFI board's BIOS called "Clockgen Voltage Control". It can supply more voltage to your clockgen chip, which helps reduce noise in the clock signal. I think the default is 3.45v; can you raise that and let me know if it helps? My ASUS P5K-E has the same thing, except it is called "Clock Over-Charge", and I now have it set to +1.00v, which is the max. As far as I can tell, it has made a massive improvement (at the moment); I will see if it stays that way.
Last edited by CrazyNutz; 03-13-2008 at 07:19 AM.
Interesting. I only use Xvid, so that may be the deal.
Also, I agree on stressing the extra instructions (SSE4.1). Interesting that you brought that up, because I am considering creating a stress-test program that will test all x86 integer, floating-point, MMX, and SSE instructions. It will take a little time to create, since there are hundreds of instructions. Plus, I would like to have redundant datasets to compare the results of each test against: one that is statically compiled in, and one that resides on disk. This way, when a test completes, it will compare the computed result with the correct result in memory; if it's good, then it will move on, and if not, it will double-check against the correct result on disk before it errors out. This should make for a more CPU-only stability test, hopefully excluding RAM errors (although if the RAM is producing a lot of errors, the redundant dataset will have little hope).
I am having a similar issue: Prime95, which was unstable for several runs, will one day be stable for 13+ hours. All settings are exactly the same, and so is the temperature. I have absolutely no explanation for this.