No problem Diverge, our rigs are a lot alike... :)
Sounds like you are having some good temps too. :up:
Make sure to post me up a screen shot when you get up and running.
Having all 3 blocks in the 'low resistance' category is helping my pump out, and my temps too, I hope. :)
I bet an official EVGA 295 block will be made soon.
Go with the flow baby!! :yepp:
The earlier posts about using Arctic Silver Ceramique over MX-2 on GPUs have me thinking I'd better keep an eye on my temps. If MX-2 does get too thin on GPUs, so that my block-to-chip contact suffered, I would think my temps would begin to climb?
For those that think MX-2 gets too thin, is it over time, or right away?
I'm gonna be using IC Diamond if I can get ahold of it. I was gonna order it from Petra's, but it was OOS when I ordered.
Thanks for the posts. :)
I probably am fine then. I was glad to hear your MX-2 report too.
You might enjoy this info...
Looking at this site:
http://extreme.pcgameshardware.de/gr...gtx-285-a.html
Post number 2 has this chart...
http://img84.imageshack.us/img84/665...0clocksdj3.jpg
Looks like if we can coax our shaders into running at 1566, our core will go up to 771, and we should have our memory at 1323 or higher...
Getting the shaders to run error free at 1620 would be super, and cores up to 792. :p:
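Just to eyeball how those chart pairs relate, here's a quick Python sketch. The numbers are pulled straight from the chart rows above, plus my own verified 756/1512 — nothing official. Note the shader:core ratio the chart implies isn't constant:

```python
# Clock pairs: (core, shader). The 771/1566 and 792/1620 rows come from
# the chart in post #2 of the linked thread; 756/1512 is my own card's
# verified setting. These are just the numbers quoted above, not a spec.
pairs = [
    (756, 1512),
    (771, 1566),
    (792, 1620),
]

def shader_core_ratio(core, shader):
    """Shader:core ratio implied by one clock pair."""
    return shader / core

for core, shader in pairs:
    print(f"core {core} / shader {shader} -> ratio {shader_core_ratio(core, shader):.3f}")
```

So the linked pairs drift away from a clean 2:1 as the clocks climb, which may be worth keeping in mind when reading the chart.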
I think I have verified my 280's max settings for all three now: core, shaders, and memory...
I already knew my shader limit for sure, and my core, but wanted to spend more time finding my memory's max setting. As it turns out, I already had it.
I had been running my memory at 1296 error free.
I tried setting the memory to 1350, but that makes ATI Tool ring the artifact bell non-stop. I did give 3DMark06 a run at that speed, but it caused my system to reboot. 1350 is out for sure on my card.
I thought 1332 was going to work for me, but in the name of stability I think I am going to have to give it up... 1332 did pass both 3DMark06 and 3DMark Vantage just fine. ATI Tool, however, when left to run for an extended amount of time, would detect an artifact and reset the timer. It would run 5 to 10 minutes before hitting 1 error and resetting.
I then set my memory to 1323, and ATI Tool ran error free for over 30 minutes. I really thought 1323 would be my new victory number. I shut down ATI Tool, and fired the Cryostasis Tech Demo up, it ran 1/2 way through, then re-booted my system. Man-O!! :(
I then set my memory back to 1296, and re-ran the Cryostasis Tech Demo numerous times, and not one single re-boot. I have also tested FurMark, 3DMark06, 3Dmark Vantage, UT3, Dead Space, Starfleet Command III, and GRID. All programs passed with flying colors. :)
The EVGA Voltage Tuner does not allow for a higher memory voltage, so I am not expecting any gains here. I would love to be surprised on that. Is there any reason to think memory might run faster too, by adding more volts to the core?
I feel pretty darn firm that my 280's max settings before the voltage tool are: Core=756, Shaders=1512, and Memory=1296.
I think I will be able to tell with 100% certainty if this tool helps my 280 out any. :)
I can say this... Using the info from this page:
http://www.ocxtreme.org/forumenus/showthread.php?t=4427
I figured out that my 280 is in the fourth bus, 03 I2C.
My 4 registers are set like this, using BIOS v62.00.0E.00.00 in my 280:
Register 15 - 1.1875v (Performance 3D Mode)
Register 16 - 1.0625v (Low Power 3D Mode.)
Register 17 - 1.125v (Unused voltage entry.)
Register 18 - 1.0375v (2D Mode.)
Using RivaTuner's monitoring tools:
When my core is at 300, my voltage reads 1.04v, looks like register 18.
When my core is at 399, my voltage reads 1.06v, looks like register 16.
When my core is at 756, my voltage reads 1.19v, looks like register 15.
http://www.evga.com/forums/tm.asp?m=...55974;
My question was: Is my register 17 for shader voltage, or doesn't it work that way?
Posted by Unwinder:
"No, it is not a shader voltage. In serial VID encoding mode (this mode is used on both 4800 and GTX200 series) the VT1165 VRM allows generating just one of these 4 voltages defined with registers 15-18, by a 2-bit VID defined by the state of the VRM input pins at the moment in time. In other words, it doesn't allow generating multiple voltages simultaneously and there is no such thing as a shader voltage. The VRM supports 4 different voltage entries in serial VID mode and your GPU needs only 3 of them (one for 2D, one for Low Power 3D and one for performance 3D mode). So that is just an unused voltage entry".
On my system, register 17 is just an unused voltage entry...
He further confirmed for me that the voltage value: "applies only to GPU core (and stream processors aka shaders, as they are part of GPU). Memory chips are driven by a different VRM (not software controllable)".
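To tie the register readout and Unwinder's explanation together, here's a little Python sketch of how I matched my RivaTuner voltage readings to the four BIOS voltage entries. The table values are from my card's BIOS (v62.00.0E.00.00); the matching-by-tolerance idea is just my own bookkeeping, not anything the tools actually do:

```python
# The four VT1165 voltage entries from my 280's BIOS (v62.00.0E.00.00).
# The VRM's 2-bit VID selects exactly ONE of these at a time -- per
# Unwinder, there is no separate shader voltage.
VOLTAGE_REGISTERS = {
    15: (1.1875, "Performance 3D"),
    16: (1.0625, "Low Power 3D"),
    17: (1.1250, "Unused voltage entry"),
    18: (1.0375, "2D"),
}

def match_register(measured_volts, tolerance=0.01):
    """Guess which register a (rounded) RivaTuner reading corresponds to."""
    for reg, (volts, mode) in VOLTAGE_REGISTERS.items():
        if abs(volts - measured_volts) <= tolerance:
            return reg, mode
    return None

# My RivaTuner readings line up like this:
print(match_register(1.04))  # core 300 -> (18, '2D')
print(match_register(1.06))  # core 399 -> (16, 'Low Power 3D')
print(match_register(1.19))  # core 756 -> (15, 'Performance 3D')
```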
I think that indeed, 1296 is the end of the story for my 280's memory. ;)
I think that chart above is wrong, because I was able to get 756 on core with 1512 on shaders... you were too.
Bumping my voltage up to about 1.30v on my GPU let me overclock it to 783/1566. But my problem is VRM temps start to get up there, and that bothers me.
Either my cooling is a one-off, or the DTek unisinks aren't that good for cooling VRMs. Looking at the temps you got in Everest, I wish I'd got a full cover block for my 280.
Thanks for the info. I will keep an eye on my VRM's when I start to add more voltage. Rest assured, I will give a VRM temp report. ;)
You may be correct about the chart...
I did get my core up to 756 with my shaders at 1512. Maybe the numbers you can actually lock in are accurate, but how they line up next to each other is off?
I knew that when my memory is set to 1296, my system runs rock solid, so I went for 1323 on memory. At 1323, it consistently makes my system reboot, but only when running the Cryostasis Tech Demo. All other programs I have tested run fine. That would be ATI Tool for over 30 minutes, not 1 artifact ding... Vantage, 3DMark06, the Crysis benchmark tool, Crysis the game, UT3, and Dead Space have all passed with flying colors now...
Next I went for 1332 on memory speed. ATI Tool did detect an artifact on me a few days ago when running my memory at 1332, but now it runs error free? I have no explanation.
http://img89.imageshack.us/img89/716/1332sh5.jpg
If any of you 280 guys set your memory faster than 1323, give that Cryostasis Tech Demo a run and report how it goes.
I think the program doesn't like an OC?
With my Q6600 @ 3.8GHz, and Single 280 set to: Core=756, Shaders=1512, and Memory=1350, ATI Tool when set to detect artifacts, would give me just 1 ding, about every 2 to 3 minutes, and the artifact timer would reset.
In the name of science, I decided to plow forward anyway...
Vantage ran fine and gave me a new record: http://service.futuremark.com/compare?3dmv=742410
http://img149.imageshack.us/img149/8...0p15911hp0.jpg
The Crysis GPU Benchmark also ran fine...
1/29/2009 10:16:04 PM - Vista 64
Beginning Run #1 on Map-island, Demo-benchmark_gpu
DX10 1900x1200, AA=No AA, Vsync=Disabled, 64 bit test, FullScreen
Demo Loops=3, Time Of Day= 9
Global Game Quality: High
================================================== ============
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 50.22s, Average FPS: 39.82
Min FPS: 30.39 at frame 1945, Max FPS: 53.59 at frame 110
Average Tri/Sec: -37496732, Tri/Frame: -941616
Recorded/Played Tris ratio: -0.97
!TimeDemo Run 1 Finished.
Play Time: 47.45s, Average FPS: 42.15
Min FPS: 30.39 at frame 1945, Max FPS: 61.18 at frame 90
Average Tri/Sec: -39176344, Tri/Frame: -929373
Recorded/Played Tris ratio: -0.99
!TimeDemo Run 2 Finished.
Play Time: 47.51s, Average FPS: 42.09
Min FPS: 30.39 at frame 1945, Max FPS: 61.18 at frame 90
Average Tri/Sec: -39127436, Tri/Frame: -929564
Recorded/Played Tris ratio: -0.99
TimeDemo Play Ended, (3 Runs Performed)
================================================== ============
Completed All Tests
<><><><><><><><><><><><><>>--SUMMARY--<<><><><><><><><><><><><><>
1/29/2009 10:16:04 PM - Vista 64
Run #1- DX10 1900x1200 AA=No AA, 64 bit test, Quality: High ~~ Overall Average FPS: 42.12
FurMark also didn't have any issues with the OC...
http://img147.imageshack.us/img147/3...furmarkdc5.jpg
I visually did not see any artifacts. If ATI Tool would not give me 1 ding every few minutes, I don't think I would even know I had an issue...
If you had 1 ding every few minutes running ATI Tool, but Crysis, Vantage, and FurMark all seem to have no artifacts to the human eye, would you still consider it a valid memory setting to select?
FYI - If you're running a single 280 or 285, and you want to see others' Vantage scores, look here:
http://www.evga.com/forums/tm.asp?m=...55983;
We are recording non-sli scores. :)
Lots of good info here, Talonman - and I appreciate the comparison thread you're doing at EVGA Forums.
You know ... on the MX-2 questions. I've been using it exclusively, but it seems to me the latest stuff I have is quite a bit "sloppier/thinner." Not trying to fuel the myth, just saying I might give something else a try.
Thanks sazza, next time I will try another TIM on my GPU, just to see how my temps change...
Well, my system answered my question about whether I should consider 1350 a valid GPU memory selection for me...
Vantage had successfully run numerous times error free at 1350, but a run I just did caused my system to reboot. :down:
1350 is officially back out of the hunt for me.
I set my 280's memory back down to 1332, and started ATI Tool again, scanning for artifacts. 37 minutes in, I heard 1 ding. Rats! :)
The 280's memory then went to 1323, and I started scanning for artifacts again. 58 minutes in, I heard 2 dings. Wow. The timer reset, and needless to say, I hated to see that...
Back to memory 1296, and re-testing.
Note: When ATI Tool detected an artifact running 1332, and 1323, I was surfing the forums. Do you think that could make ATI Tool report a false artifact hit?
Update: 1296 still running error free after 100 minutes.
http://img504.imageshack.us/img504/4...minutesnd5.jpg
Tried 1323 again... It ran 176 minutes error free.
http://img128.imageshack.us/img128/6...minuteswe6.jpg
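A quick tally of where my memory testing stands, in Python form. These are just my own logged runs from the posts above, not a general rule for every 280:

```python
# (memory clock, ever produced an artifact or crash?) from my runs above.
runs = [
    (1296, False),  # 100+ min in ATI Tool, every game and bench passes
    (1323, True),   # 176 min clean once, but 2 dings at 58 min earlier,
                    #   and Cryostasis rebooted the system
    (1332, True),   # 1 ding around 37 minutes in
    (1350, True),   # a ding every 2-3 min; Vantage finally rebooted on it
]

def best_stable(runs):
    """Highest memory clock that never artifacted or crashed in any run."""
    clean = [clock for clock, failed in runs if not failed]
    return max(clean) if clean else None

print(best_stable(runs))  # -> 1296
```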
Talonman, I have a GFX2 DTek block and a unisink with a fan blowing over it. I have raised my clocks up and currently have it folding. I wanna fold for a couple of days first and then ease into the hard stuff. The temps are extremely low for folding on a GPU2 client with this water setup. It's currently folding at 36C, where before it would idle at 41C and fold at 61C. I do not know if it's stable yet. I just applied those settings. The client has no EUEs and has completed steps, so it looks like it may hold for folding, but I need to run more intensive tests.
My OC currently is at 720/1512 Unlinked/1200. All temps look good in GPU-Z. The VDDC Temps range from 56C to 69C...there is one VDDC Slave that's on the hot side.
I'm really amazed with this water loop so far. I could never have achieved those clocks at all on stock cooling. There is just no way. The shaders were the thing holding me back before, but temps were also an issue. I got into a thermal runaway condition where the stock cooling just could not keep up once I passed a certain threshold. The stock cooling wasn't bad, and gave me great temps, but for OCs like this it takes water. I used the Innovation Cooling (IC) Diamond for my TIM.
I'm very impressed with this setup. I'll report more on the stability when I get a chance to really test it properly.
I beat ya out by 18 points :p:
http://service.futuremark.com/compare?3dmv=742758
If I turned my CPU back up to 3.8GHz, I might break 16k... but I don't wanna risk my PWMs blowing up again lol
Attached are my results with a water cooled 280. I ran a 30 min stability test and the highest temp was 38-39C as reported by the NVIDIA monitor.
If you want loop or waterblock details just let me know :up:
Thought I might add my current settings, since this seems to be the GTX 280 thread 'round these parts.
I bought a Zotac GTX280 AMP! stock at 700/ 1400/ 2300
Right now I have 'er stable at 750/ 1525/ 2430, testing FurMark windowed, 4x MSAA, 1280x1024, Xtreme burn. I let it run an hour, then up the clocks.
The block is MCW60 rev.2 plus unisink:
http://i2.photobucket.com/albums/y44...0/DSCN1777.jpg
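Side note on the memory numbers, since this thread mixes two conventions: the Zotac box quotes the effective (DDR) clock, while the 1296/1323/1332 figures earlier in the thread are the real clock. GDDR3 transfers on both clock edges, so converting is trivial:

```python
def effective_mem_clock(real_mhz):
    """GDDR3 is double data rate: effective clock = 2x the real clock."""
    return real_mhz * 2

# The AMP!'s '2300' is 1150 real; Talonman's 1296 real is 2592 effective.
print(effective_mem_clock(1150))  # -> 2300
print(effective_mem_clock(1296))  # -> 2592
```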
756/ 1566/ 2448 and climbing...
edit: 2520 mem is out-- insta-artifact, hard lock, the business. So, true to my normal operating procedure, I dialed it back two steps and am retesting at 2448 mem (1 hour stable so far) with 1620 shaders.
I suspect the GPU-only block will benefit the shaders/ GPU more than the memory, especially considering the ridiculously low airflow in my box. Temps topping at 55c with linpack max running with Furmark and Folding. Without linpack and folding I'm pretty steady at 51-52c.
edit2: Geez. 6 FPS improvement when raising the shader from 1566 to 1620. I heard these were shader bottlenecked, but didn't realize how much!
I dialed it back a bit on the core for everyday use, but yeah, it could probably go through Futuremark fine. I played games for a few hours with those settings.
I'm talking about FurMark. It's a handy little heat generator that kills any and all cooling systems with the biggest blast of heat possible. As a worst-case-scenario-type tool it's great for finding a max stable OC (Futuremark ain't even close) :)
Here's a link: http://www.ozone3d.net/benchmarks/fur/
edit: what the heck. I just took the shader up to 1674. That makes 756/ 1674/ 1224 (2448). We'll see how she runs.
edit2: Let's see how long this holds up :rofl:
Hm. GPU tops out at 756 (artifacting at 777 and 771), but the shader's still going strong at 1674. I might have something special here...
I did not run mine as long as you did, but here are the results :D
I have the core and shader locked together, but I also noticed that it likes 756. I did switch it to 800 once, but I can not remember if it actually set it to that.
Cool, man. Are you running the Xtreme Burning Mode? Your screen looks different than mine. A quick note: most of my artifacts happen after 45 min (2700 seconds). You might want to run it a little longer. In my experience, when you get past that point artifacts from changes show up a whole lot quicker :up:
Since I started testing my final video OCs with Furmark for at least an hour, I've not had a single GPU OC related crash.
Oh, I was just running the stability test, that probably explains the fps difference despite having twice the MSAA :cool:
One thing I do know is that my best cooling outcome will always be limited by the fact that the rad that cools the GPU sits after the rad that cools my OCed CPU, so the air cooling that rad is already warmer. The difference between the coolant temps of the two loops is generally <5F.