
View Full Version : Water Block Testing Woes (Charts/Errors Present)



Martinm210
08-19-2008, 06:59 AM
Over the course of the last two weeks I've been through 40+ mounts of different blocks, and I'm really pulling my hair out trying to repeat tests.

With 18 sensors and logging capabilities, I thought this would be a breeze. But upon trying to validate my first runs, I continued to find the results difficult to duplicate any better than about .5C.

Anal as I am, I decided to completely discard my previous work and start over. Yet again I'm plagued with this accuracy/precision problem.

I was having some issues with TIM compound consistency, so I switched compounds. I was also having some problems with OCCT paging the processor, so I switched to Prime95, and I was planning to make some adjustments to mounting as well.

Anyhow, I decided to make another all night run to look for trends and this is what I'm finding.

First up, the actual run with the sensors averaged: eight air inlet sensors, two water sensors, and the four core DTS sensors.

http://img146.imageshack.us/img146/8134/wbtestingwoesgl3.gif

And this is a special plot to figure out the error introduced by motherboard heat loss as ambient changes. I'm finding anywhere from .22 to .27C of error per degree of ambient.
http://img146.imageshack.us/img146/6263/wbtestingwoes2dg1.gif
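Applying a correction like that is straightforward. Here's a rough sketch in Python; the slope and reference ambient are made-up values in the range reported above, not the actual fitted numbers:

```python
# Sketch: remove the ambient-dependent component from a measured delta.
# slope is hypothetical (~0.25 C of delta per degree of ambient, in the
# .22-.27 range reported above); fit it from your own log before using it.

def correct_delta(delta, ambient, ref_ambient=25.0, slope=0.25):
    """Reference a measured water/core delta back to ref_ambient."""
    return delta - slope * (ambient - ref_ambient)

# Example: a 1.44 C delta logged at 27 C ambient, referenced to 25 C
print(round(correct_delta(1.44, 27.0), 2))  # -> 0.94
```

The correction only helps if the ambient/delta regression is actually a good fit, which is jiffy2's point about R^2 later in the thread.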

That's all fine and dandy, and I can correct for that, but what does all this look like when plotted over a variable ambient temperature? Just how does the 1C resolution on the four cores carry forward into accuracy?

So I plotted the corrected water/core delta over time, but more importantly over a change in ambient temperature. The 30-minute moving average is still only good to about .5C:(
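For reference, the 30-minute moving average is nothing exotic; assuming one sample per minute, it's just a trailing 30-sample window (sketched here with toy numbers):

```python
# Sketch: trailing moving average like the 30-minute window above.
# Assumes one temperature sample per minute, so window=30 samples.

def moving_average(samples, window=30):
    """Average each value with the (window - 1) samples before it."""
    return [sum(samples[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(samples))]

# Toy data with a 2-sample window just to show the mechanics
print(moving_average([20.0, 21.0, 22.0, 23.0], window=2))  # -> [20.5, 21.5, 22.5]
```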

I'm afraid as long as we're dealing with 1C resolution sensors on the processor, it's always going to be hard to get much accuracy in testing.

http://img146.imageshack.us/img146/4505/wbtestingwoes3su2.gif

Now how do you go about sorting out 12 blocks in a 3C window?

You don't very well, and it probably doesn't matter anyway regarding overclock performance...:shrug:

Anyhow, thought I'd share my findings. Testing waterblock thermal performance is not much fun when you're trying to split hairs...

.5C of error easily present in the core resolution; add another .2C per degree for ambient (which can be corrected), then add who knows what for TIM application and mounting pressure error.

Now I'm wondering if you should purposely make ambient vary and run all night for a better average...:shrug:

nikhsub1
08-19-2008, 07:27 AM
No testing will ever be perfect. The simple fact is, the point of repeated mounts is to get a trend line, and ONLY a trend line. None of this testing is absolute; although everyone here wishes it were, it never will be. This is why I try to test at the same ambient temps... things skew weirdly when you don't. I'm not sure how well or whether your correction factor works; it just adds more to the already convoluted data IMO. I personally like to keep it simple, which is why I chart EACH mount so one can see mount variances in a line. Well, you've seen it, you did my graphs for me LOL. With these graphs you can see TRENDS, not absolutes. As blocks shrink the performance gap, one cannot say block A is clearly better than block B; the lines are getting blurry and will likely only get more muddled, not clearer. Here is some old hat for reference.

http://anonforums.com/builds/teststation/nikhsub1mount.png

BlueAqua
08-19-2008, 07:50 AM
Great testing you guys. Testing this gear involves almost unlimited variables that you've tried to limit. The testing you guys have done really gathered the data that we can use to establish good/not so good equipment. I think much more at this point is just splitting hairs. Nice post Martin.

jiffy2
08-19-2008, 08:42 AM
With 18 sensors and logging capabilities, I thought this would be a breeze. But upon trying to validate my first runs, I continued to find the results difficult to duplicate any better than about .5C.

By this, do you mean that the error is +/- .5*C per mount? If so, keep in mind that by averaging 5 mounts, the standard error of the mean is reduced by a factor of sqrt(5), so your new error will be about +/- .22*C.
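In code, that sqrt(n) reduction looks like this (the mount numbers below are hypothetical, just to show the scale of the effect):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample std dev / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return s / math.sqrt(n)

# Five hypothetical mounts with roughly +/- .5 C of scatter:
mounts = [10.1, 10.6, 9.8, 10.4, 10.2]
print(round(standard_error(mounts), 2))  # -> 0.14, much tighter than one mount
```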

You may also be able to use statistical inference. t-procedures can be used by taking the standard deviation of the waterblock's mounts; then, for a 95% confidence interval with 5 mounts, use the mean +/- 2.776 standard errors (2.776 is the t critical value for 4 degrees of freedom). This works if there are no outliers or strong skewness in the data.

If there is no overlap between two blocks' confidence intervals, then a conclusion can be drawn about which one is better with 95% confidence. (The converse is weaker: intervals can overlap slightly while the difference is still statistically significant, so a two-sample t-test is the safer check in that case.)
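A minimal sketch of that comparison, hard-coding the 2.776 critical value mentioned above (the per-mount delta-T data for the two blocks is made up):

```python
import math

T_CRIT_95 = {4: 2.776}  # t critical value for a 95% CI at 4 degrees of freedom

def ci95(samples):
    """95% confidence interval for the mean of a small sample."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    half = T_CRIT_95[n - 1] * s / math.sqrt(n)
    return (mean - half, mean + half)

block_a = [10.1, 10.6, 9.8, 10.4, 10.2]   # hypothetical delta-T per mount
block_b = [11.3, 11.0, 11.6, 11.2, 11.4]
print(ci95(block_a)[1] < ci95(block_b)[0])  # -> True: no overlap, B runs hotter
```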

http://en.wikipedia.org/wiki/Normal_Distribution
http://en.wikipedia.org/wiki/T_distribution
http://en.wikipedia.org/wiki/Confidence_interval



http://img146.imageshack.us/img146/6263/wbtestingwoes2dg1.gif

What's the R^2 value for that regression? If it's far away from 1, then it might be better to not use the correction.


http://img146.imageshack.us/img146/4505/wbtestingwoes3su2.gif

The black line there is interesting, because it appears sinusoidal in nature... If you average it out, it's not much of a problem, obviously, but do you know what caused that?


Now I'm wondering if you should purposely make ambient vary and run all night for a better average...:shrug:
It would make more sense to keep the ambient constant, if at all possible. Even if you do let it run all night, the ambient temperatures will vary from day to day, and will take more time to test.

orclev
08-19-2008, 09:26 AM
Hmm, lies, damn lies, and statistics. I hate having to rely on a statistical analysis of products (even though it's largely unavoidable, what with things like MTBF being purely statistical in nature).

jiffy2
08-19-2008, 09:35 AM
My original post was ambiguous, this is what I meant by it.

Even if you do let it run all night, the ambient temperatures will vary from day to day [thus logging overnight to obtain a range of ambient temperatures won't do a whole lot, because the results won't be comparable across different mounts]. It will also take more time to test [because you're logging temps all night. This would pretty much limit you to 1 or 2 mounts per day.].



I think it could be just the opposite, if you consider that the longer you run the test, the more opportunity there is for your loading software to get disrupted. I really do think more of the anomalies we are seeing are due to that than to anything else. And I'm not saying the other things don't matter for consistency, because they do.

But if your loading software doesn't stay constant, you could have the system in a lab-grade chamber and your results would still vary. My above graph makes that point. I've seen it time and time again. It probably took me 20 hours to clean my system up to where it wasn't being paged all the time, and I still have to watch out for it on every testing run.

andyc
I agree with this. The black line that I pointed out above as being sinusoidal could be that way due to the temperature loading program (that was my initial thought). It could also be due to the changing ambient temperatures: the ambient can change almost instantly, showing up in the temperature delta before the system reaches a new equilibrium.

jiffy2
08-19-2008, 09:56 AM
Hmm, lies, damn lies, and statistics. I hate having to rely on a statistical analysis of products (even though it's largely unavoidable, what with things like MTBF being purely statistical in nature).
Using probability and statistics is fine as long as the testing methodology and procedures are stated clearly and performed correctly. MTBF, for example, is useless without knowing the methods used to find it. Lots of extrapolation is used, certain operating conditions are assumed, and it's an exponential random variable. Probability is used all the time in science because it's an extremely important tool for data analysis. There is a degree of uncertainty in any experiment; probability & statistics allows one to analyse the uncertainty in order to draw conclusions.


OK thanks,

I was a little thrown off by the terms. I've never had formal training in statistical analysis, so thanks for taking the time to explain it.

it would be good to learn more about it, so I'll start reading up.

andyc
Probability & statistics is one of the most useful branches of math, but make sure you read into the misuses of statistics...there are a lot of ways to create bias, and different methods can only be used in certain situations (you can't perform inference on data that's highly skewed or with outliers, for example).

orclev
08-19-2008, 10:08 AM
Probability & statistics is one of the most useful branches of math, but make sure you read into the misuses of statistics...there are a lot of ways to create bias, and different methods can only be used in certain situations (you can't perform inference on data that's highly skewed or with outliers, for example).
Which is exactly why I'm inherently distrustful of statistics: more often than not they're used by marketing departments to confuse the general public, who can't spot the subtle flaws in statistical analysis and arguments. I understand it's a useful tool, just one that I think should be approached very cautiously, as it's used to spread lies more often than to glean genuinely useful information.

In this context of course, you can ignore my comment, it's more me being snarky because of my pessimistic impression of the field, and should not be interpreted as implying anything about anyone in this forum, or any of the analysis on here.

orclev
08-19-2008, 10:13 AM
Funny you should mention that, because I've always believed you could twist the numbers any way you wanted to make a point if taken out of context.

I think the US government would be a prime example of misusing statistical data to make a point regardless of the reality of a situation, or the media when it comes to promoting the "fear factor":up:

andyc
Exactly, statistics make my bull:banana::banana::banana::banana: sense tingle. See my previous post and disclaimer for how that relates to this forum though.

ShadedNine
08-19-2008, 10:19 AM
Which is exactly why I'm inherently distrustful of statistics: more often than not they're used by marketing departments to confuse the general public, who can't spot the subtle flaws in statistical analysis and arguments. I understand it's a useful tool, just one that I think should be approached very cautiously, as it's used to spread lies more often than to glean genuinely useful information.

In this context of course, you can ignore my comment, it's more me being snarky because of my pessimistic impression of the field, and should not be interpreted as implying anything about anyone in this forum, or any of the analysis on here.

This is the difference between science and spin. If you try to pull statistical tricks in academia, you'll get called out on it during the peer review process and pretty much destroy all interest in your project. Marketing likes to give you statistical summaries (80% of x is y), but they do NOT open their results up to analysis. I'm not saying this process or academia in general is perfect, but you'll find a LOT less BS in a scientific journal than you will in a faux news broadcast.

If you're worried about statistical error or deliberate bias in these test results, Martin seems pretty open to having his process reviewed. In fact, this thread seems to be an effort by him to vent his frustrations, gather input, and give us insight into the challenges and assumptions one has to make in this sort of experiment. Granted, you don't have video of him performing the experiments, for those with really tight tinfoil hats on...

jiffy2
08-19-2008, 10:22 AM
Funny you should mention that, because I've always believed you could twist the numbers any way you wanted to make a point if taken out of context.

I think the US government would be a prime example of misusing statistical data to make a point regardless of the reality of a situation, or the media when it comes to promoting the "fear factor":up:

andyc
http://en.wikipedia.org/wiki/Opinion_poll
^ Check out the "potential for inaccuracy" section there. Some people purposefully introduce those inaccuracies in order to get the results they want.

In an opinion poll where there are 2 possible responses (like yes or no), then in an ideal situation, 2500 people can be used to accurately predict the opinion of an INFINITELY large population to +/- 1%. Of course, this is rarely the case.


Which is exactly why I'm inherently distrustful of statistics: more often than not they're used by marketing departments to confuse the general public, who can't spot the subtle flaws in statistical analysis and arguments. I understand it's a useful tool, just one that I think should be approached very cautiously, as it's used to spread lies more often than to glean genuinely useful information.

In this context of course, you can ignore my comment, it's more me being snarky because of my pessimistic impression of the field, and should not be interpreted as implying anything about anyone in this forum, or any of the analysis on here.
Yeah, I definitely understand where you're coming from here. No offense was taken. :)

orclev
08-19-2008, 10:34 AM
I don't think anyone is saying anything is wrong with the numbers or Martin's statistics; I know I wasn't. And I don't believe orclev was either.

andyc

Correct, I have great respect for both your and Martin's work (among others) and have no doubt that both of you have done everything you can to eliminate bias or misleading results from your analysis. My statements were meant as a general comment on the field of statistics as it's used and presented to the general public, something I think was not obvious in my original statement, hence the follow-up posts.

Edit: I'd also like to thank jiffy2 for his comments earlier on some statistical methods that could prove useful in analyzing Martin's results. It's obvious from his posts that he has a good understanding of statistics (something I cannot claim), and I'm sure his insight will prove valuable as we all attempt to better understand the data being collected.

ShadedNine
08-19-2008, 10:45 AM
My statements were meant to be taken as a general statement concerning the field of statistics as it's used and presented to the general public

Here you're quite right. Sadly, the 'general public' is rarely interested in analyzing the experimental method and statistical approaches used in a study (or performing any critical thinking, really). Mostly, they're simply told what to think. There's absolutely nothing wrong with not trusting things you hear from the media, friends (although we have a well-known bias toward accepting claims from friends, even though their sources are no more reliable), and even academia. That doesn't mean it should all be taken as incorrect, but this is why revealing sources matters: so you can check. Even if you don't have an advanced degree in statistical analysis, you can at least confirm that a result has been through some sort of peer review process.

As for the statistics in this particular instance, I'll second jiffy2: increasing your sample size is all that's necessary to compensate for the deviations in the results. Repeated mounts of a single block should be normally distributed, and the blocks should all share a relatively similar variance (since the variance is caused by ambient and other common factors); a Levene's test can confirm this. You just have to be careful to avoid experimental bias. Ideally, you should randomize the mounts of the various blocks (each should get a particular number of mounts, but they should be chosen in a random order). As you increase your sample size, you should be able to start making inferences using ANOVA (analysis of variance). What real difference does the sample size make? It's the difference between saying "X was better than Y *this time*", "X is better than Y *occasionally*", and "I'm 99% certain that X is better than Y". You may end up with the same graph displaying the temps of each block, but you can be far more certain of the reliability of that graph.

Using statistical analysis software (such as SPSS) might be a help. It makes it a lot easier to perform transformations to compensate for various factors (ambient), as well as crunching the numbers for ANOVA and various other tests.
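SPSS isn't strictly required for the F statistic itself. Here's a hand-rolled one-way ANOVA sketch with invented mount data, just to show what the test computes:

```python
# Sketch: one-way ANOVA F statistic, computed by hand.
# F compares the variance BETWEEN blocks to the variance WITHIN each
# block's repeated mounts; a large F means a real block-to-block difference.

def anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical delta-T for five mounts each of two blocks
block_a = [10.1, 10.6, 9.8, 10.4, 10.2]
block_b = [11.3, 11.0, 11.6, 11.2, 11.4]
f = anova_f([block_a, block_b])
print(f > 5.32)  # -> True: exceeds the F(1, 8) critical value at 95%
```

If SciPy is available, `scipy.stats.f_oneway` and `scipy.stats.levene` do the same jobs and also return p-values.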

brinox
08-19-2008, 10:54 AM
I've had a small amount of statistics exposure, and even though it's been a while, I still understood most of what jiffy2 said. Thanks! That was quite a refresher.

With that said, I am by no means competent to comment without a lot of doubt, but the sinusoidal line occurs because of the correction Martin made to represent a trend line. Taking the derivative of the equation that best represents that sinusoidal line would thereby show us such a trend line. (I think; I fully admit I could be completely wrong, but IIRC that's the simplest way to estimate the trend of the said data.)

ShadedNine
08-19-2008, 11:12 AM
taking the derivative of the equation that best represents that sinusoidal line would thereby show us such a trendline. (I think, i fully admit i could be completely wrong, but IIRC thats the simplest way to estimate the trend of the said data)

It's a good try, but the derivative of sin(x) is cos(x), which is effectively just shifting the line. That line is the trendline. That it's a smooth line is simply the nature of temperatures (they aren't discrete), and the lack of a pattern in the slope is because this experiment is stochastic.

jiffy2
08-19-2008, 11:15 AM
ive had a small amount of statistics exposure, and even though its been a while, i still understood most of what jiffy2 said. thanks! that was quite a refresher.

with that, i am by no means competent to comment without a lot of doubt, but the sinusoidal line occurs because of the correction made by Martin to represent a trend-line. taking the derivative of the equation that best represents that sinusoidal line would thereby show us such a trendline. (I think, i fully admit i could be completely wrong, but IIRC thats the simplest way to estimate the trend of the said data)
Well, the derivative of sin(x) is cos(x) (and the derivative of cos(x) is -sin(x)), so we'd get a different sinusoidal curve, whereas the trend line was linear.

Ideally, the data should be as flat as possible; however, his initial data showed a slope. He then applied that trend line to transform the data into a horizontal line (to eliminate variation caused by ambient temperature changes). The problem is, the transformed data turned out sinusoidal, indicating that the original data was not a straight line but oscillatory in nature (like a rotated sine wave). This indicates that something else is causing error.

Of course, I could just be missing something here.

Oh, and thanks to everyone for being so polite. :)

Edit:

It's a good try, but the derivative of sin(x) is cos(x), which is effectively just shifting the line. That line is the trendline. That it's a smooth line is simply the nature of temperatures (they aren't discrete), and the lack of a pattern in the slope is because this experiment is stochastic.

Beat me to it. :) I take way too long writing my responses.

orclev
08-19-2008, 11:26 AM
But if your loading software doesn't stay constant, you could have the system in a lab grade chamber and your results would vary. My above graph makes that point. I've seen it time and time again. It probably took me 20 hours to clean my system up to where it wasn't being paged all the time, and I still have to watch out for it on every testing run.

andyc

Even though it's less reflective of actual use, maybe using a known heat source (something simple like a heating element) rather than an actual CPU might simplify data gathering? After all, it's not really the CPU under test here, but the waterblock's ability to dissipate heat into a cooling loop. On the downside you lose the in-chip heat probes, but considering that they're already of dubious accuracy to begin with, maybe that's really a blessing? It also eliminates the need to maintain a consistent load across the cores, and allows for easier adjustment of the heat load on the system.

Edit: It also occurs to me that instead of running something like Prime95 inside Windows, maybe a better solution is to come up with a simple low-level program that can be booted and put load on the cores without requiring a full OS. On the downside it does require the program to be written, and it really only exercises the CPU and not the entire system, but for the purpose of putting a constant heat load on the waterblock it serves its purpose. As a bonus it could also log the Tj deltas as it goes and display a summary at the end of the run, or possibly log the values to disk, although at that point you'd almost need to be running a minimal OS like a stripped-down Linux kernel just to simplify the disk IO.
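As a halfway step before anything bare-metal, the "fixed, repeatable load on every core" idea can be sketched at the script level. This is only an illustration of the shape of such a loader, not a replacement for Prime95's error-checked workload:

```python
# Sketch: saturate every core with the same deterministic arithmetic loop.
import multiprocessing
import time

def burn(seconds):
    """Tight floating-point loop to keep one core pegged for a fixed time."""
    end = time.time() + seconds
    x = 1.0001
    while time.time() < end:
        for _ in range(10000):
            x = (x * 1.0000001) % 1000.0
    return x

if __name__ == "__main__":
    # One worker process per core, all running the identical loop
    workers = [multiprocessing.Process(target=burn, args=(1.0,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("load run complete")
```

A real test harness would run this for the full logging window and read the DTS values alongside it.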

jiffy2
08-19-2008, 11:40 AM
Even though it's less reflective of the actual use, maybe using a known heat source (something simple like a heating element) rather than an actual CPU might simplify data gathering? After all, it's not really the CPU under test here, but the waterblocks ability to dissipate heat into a cooling loop. On the downside you lose the in chip heatprobes, but considering that they're already of dubious accuracy to begin with, maybe that's really a blessing? It also eliminates the need to try to maintain a consistent load across the cores, and allows for easier adjustment of the heatload on the system.
This is definitely easier to get good data from, but one problem is the area of the heat load. The relative performance of waterblocks changes when the area the heat load is applied to changes (single core vs. dual core vs. quad core, etc.). Even if you could emulate the area of the heat load (a die simulator), modern processors have an integrated heat spreader. Die simulators were common before processors had IHSes, but now they're less relevant. Data obtained this way could potentially be used to supplement the real-world testing results, however.

Edit:


Edit: It also occurs to me that instead of running something like Prime95 inside of Windows, maybe a better solution is to come up with a simple low level program that can be booted and put load on the cores without requiring a full OS. On the downside it does require the program to be written, and it really only exercises the CPU and not the entire system, but for the purpose of putting a constant heatload on the waterblock it serves its purpose. For a bonus it could also log the Tj deltas as it goes and display a summary at the end of the run, or possibly log the values to disk, although at that point you'd almost need to be running a minimal OS like a stripped down linux kernel just to simplify the disk IO.
I don't know enough about programming to know how someone would do something like that, but it'd be interesting to see if something simple like using a different OS could give more reliable results.

orclev
08-19-2008, 11:50 AM
... Even if you could emulate the area of the heat load (a die simulator), modern processors have an internal heat spreader.

Hmm, might be interesting to obtain a thermal image of the top of a processor IHS under load. Of course, I'm not sure how you would do that without burning out said processor due to lack of heatsink. I wonder how effective the IHS is at spreading the heat evenly.


I don't know enough about programming to know how someone would do something like that, but it'd be interesting to see if something simple like using a different OS could give more reliable results.
Well, if AndyC's measurements are accurate, and I have no reason to doubt they are, it could make quite a difference. I'm a programmer by profession, so even when taking pains to prevent expensive paging operations, I know there are all kinds of things an OS is doing behind the scenes that can throw a load operation off. In theory, if the process scheduler is working at 100% efficiency then there should always be something executing, and thus no difference, but in practice that's rarely the case. I'll have to give it some thought. Maybe I can strip down the Minix kernel and put something together; it would certainly be simpler and quicker than trying to retrofit a Linux kernel to such a dedicated task, although licensing issues might be a problem with using Minix.

Martinm210
08-19-2008, 12:10 PM
Great discussion. More than anything I was trying to understand two things with these long runs. One was this ambient error I'm finding. I noticed this in one prior test and wanted to try it again. I was thinking this was motherboard heat, but on the way out of town today it occurred to me that maybe this error is processor efficiency. I've always heard that a processor draws more power as it gets hotter. Maybe this .2C per degree is simply the processor itself??

The second thing I was after was how long I should log temperatures for. The sinusoidal line was simply a moving 30-minute average, which I picked because for the last couple of weeks I was testing with 30-minute logs, and it represents some of the variance I could expect at that length. But after running the chart and thinking more about it, I think the waves are really nothing more than the 1C resolution given by the processor cores. As the core cools down it's going to stairstep its way down, but since I plotted water-to-core temps, rather than an abrupt step, it appears as these waves.

In addition, I'm thinking there might be an advantage to variable ambients because of this 1C error. If you hold a very constant ambient in all tests you could potentially be sitting on the high or the low side of this resolution rounding, but if you plot the results over a wide range of ambients, preferably over several hours, then you'll have the ability to average out some of that error.
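That resolution-rounding effect is easy to simulate. The sketch below uses an invented "true" delta and a drifting ambient; the floor() is just a stand-in for however the DTS actually rounds:

```python
# Sketch: how a 1 C resolution core sensor turns a smooth drift into
# steps, and why a long average over a drifting ambient is far steadier.
import math

TRUE_DELTA = 10.4  # hypothetical real core-to-water delta, in C

# Ambient drifting a few degrees over a long run (hypothetical)
ambients = [22.0 + 3.0 * math.sin(i / 50.0) for i in range(300)]

# The core DTS reports whole degrees; the water sensor is high resolution
core_readings = [math.floor(a + TRUE_DELTA) for a in ambients]
deltas = [c - a for c, a in zip(core_readings, ambients)]

spread = max(deltas) - min(deltas)
avg = sum(deltas) / len(deltas)
print(spread)  # nearly a full degree of apparent wobble in the delta
print(avg)     # the long average is far steadier (with a fixed rounding bias)
```

Holding ambient perfectly constant would freeze the delta somewhere inside that wobble band; sweeping ambient lets the average settle.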

Anyhow, that's what I've been thinking. I'm going to stop using my 30minute logging test and purposely try logging tests overnight to see how it turns out.

More than anything I just want to be able to pick up any block I've previously tested and get the same result again a day later, or at least within something like .3C at worst.

jiffy2
08-19-2008, 12:44 PM
I was thinking this was motherboard heat, but on the way out of town today I was thinking, maybe this error is processor efficiency. I've always heard that a processor works harder as it gets hotter. Maybe this .2C per degree is simply the processor itself??

That makes a lot of sense. Even though the efficiency change isn't large, it could be enough to create a .2*C difference.


But after running the chart and thinking more about it, I think the waves are really nothing more that the 1C resolution given by the processor cores. As the core cools down it's going to stairstep it's way down, but since I plotted water to core temps, rather than an abrupt step, it appears as these waves.

D'oh. I can't believe I didn't think of that.


In addition I'm thinking there might be an advantage to variable ambients because of this 1C error. If you hold a very constant ambient in all tests you could potential be somewhere on the high or on the low side of this resolution rounding thing that's going on. But if you plot the results over a wide range of ambients, preferrable over several hours in length, then you'll have the ability to average out some of that error.

The temperature difference from one night to another might cause problems. Your regression might be able to fix the error though.

Anyhow, that's what I've been thinking. I'm going to stop using my 30minute logging test and purposely try logging tests overnight to see how it turns out.

More than anything I just want to be able to pick up any block I've previously testing and get the same result again a day later, or at least within something like .3C at worst.
Yeah, log them overnight and see how they turn out. Depending on the results, you can try other things.

I'm not sure if you'll be able to get your error down to .3*C though. If you do I'll be impressed (I'm actually impressed that you managed to get it to .5*C). Even .5*C across 5 mounts would probably be enough to draw conclusions.

ShadedNine
08-19-2008, 03:00 PM
On the subject of increasing 'load reliability', while a direct bootable load test would be ideal, you could probably get a step closer with a whole lot less work by using TinyLinux or PenDriveLinux with MPrime, or even BartPE with Prime95. Either one could just be booted from a USB memory stick. I'm guessing the problem with these approaches is the reporting tools being used and what they support (if they're Windows-only, though, maybe BartPE would still be enough).

orclev
08-19-2008, 03:19 PM
That was my initial thought, but it doesn't really solve the underlying problem. No matter what, the OS is still going to be stealing cycles to run process schedulers and other background tasks. A realtime OS (there's a modified realtime Linux kernel project somewhere) would get you a bit closer, as it's usually designed to perform the bare minimum of process interruption, but even that's only a slight improvement over a standard non-realtime OS. I've got the source code for Linux and Minix, and I'm sure I could find a decent open source load testing utility like Prime95, so it should be fairly simple to create a stripped-down barebones pseudo-OS for testing. I wouldn't really even need to worry about a process scheduler or anything along those lines; a simple IPC microkernel a la Minix, combined with a loading/logging service and a basic disk driver, should cut it. It might seem like a lot, but I think the total work required to write a minimal system based around this would be less than what would be required to strip the unnecessary portions out of a general-purpose OS.

ShadedNine
08-19-2008, 04:22 PM
I've got the source code for Linux and Minix, and I'm sure I could find a decent open source load testing utility like Prime95, so it should be fairly simple to create a stripped down barebones pseudo OS for testing. I wouldn't really even need to worry about a process scheduler, or anything along those lines, a simple IPC micro-kernel ala Minix, combined with a loading/logging service and a basic disk driver should cut it. It might seem like a lot, but I think the total work required to write a minimal system based around this would be less than what would be required to strip out the unnecessary portions of a general purpose OS.

If you want to give it a shot, MPrime is open source and uses the exact same algorithm as Prime95, which is just the Windows version of MPrime with a GUI. First, though, I'd make sure you can still access something that will read and log temperature values from the motherboard; otherwise it may lose a lot of usefulness.

orclev
08-19-2008, 04:26 PM
If you want to give it a shot, MPrime is open source and is the exact same algorithm as Prime95, which is just the windows version of MPrime with a gui. First though, I'd make sure you can still get access to something that will read and log temperature values from the motherboard, otherwise it may lose a lot of usefulness.
I already looked up the source for MPrime and dug out my copy of Operating Systems Design and Implementation (third edition); now it's just a question of finding some free time to actually sit down and see if I can get this done. Very busy right now, what with work, getting my build straightened out, and planning/paying for my wedding. If I can find the time to work on this, I definitely will, and if I get something usable put together I'll find someplace to post it and dump a link on here.

Justintoxicated
08-19-2008, 04:50 PM
Why not use Intel CPU Burn? I seem to get more consistent temps. But I'm not sure if there is a Linux port?

Martinm210
08-19-2008, 09:06 PM
"out of town" Oh hell no, you still got some testing to do:) You done yet:rofl::rofl:

Great work and a lot of thought, Martin. I learn something every time.

andyc

Hehe. No not done, just getting started..:ROTF:

I'll give the all-night run a try. I think a big part of my problem has been the 1C resolution stepping that's going on with this quad core and the 30-minute runs. Allowing it to run overnight should level that off pretty well. I haven't seen services take over Prime95 yet, but I'm sure that could happen; I'll just have to plot the results and watch for it. Maybe I'll purposely disable some of those things to prevent that.

We'll see. I'm just happy that I ran the all-night test. It seems to explain some of the mystery I was seeing...:up: