Yeah, it did not seem to work with my Gigabyte card, though funny thing: I was able to OC the fan?! So I used RivaTuner. It took me a bit to get used to it, but it does cover everything. I must say Precision is very simple.
I've been doing a little research and came to the conclusion that all 200-series cards should work at 685/1477/1160, and most should work at 702/1512/1188.
Anyone willing to test that statement and validate it on GPUGrid? I know it works on mine! (I especially need validation on 250s and 295s, as those are the extremes.)
If we can prove this, that's a really good note for our team, as anyone at stock who's willing could safely get a 19-22% boost in output.
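As a rough sanity check on that 19-22% figure: if GPUGrid output scales roughly with shader clock, the ratio of the OC shader clocks to a reference GTX 260 shader clock of 1242 MHz (an assumption here; stock clocks vary by vendor) lands right in that range.

```python
# Rough check of the claimed output boost, assuming GPUGrid throughput
# scales linearly with shader clock and a reference GTX 260 shader clock
# of 1242 MHz (an illustrative assumption; vendor cards vary).
STOCK_SHADER = 1242  # MHz, reference GTX 260

def boost_percent(oc_shader_mhz, stock_mhz=STOCK_SHADER):
    """Percent throughput gain if output scales with shader clock."""
    return (oc_shader_mhz / stock_mhz - 1) * 100

print(f"{boost_percent(1477):.1f}%")  # ~18.9%
print(f"{boost_percent(1512):.1f}%")  # ~21.7%
```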
Also, need updates on the RAC of OCs as the points have changed...
I'm gonna guess that Precision will work with any reference card cause they really are all the same, just different stickers and tighter bins.
Damn, I may have spoken too soon. 700/1500/1180 might be stable for GPUGrid, but it's not stable for 3DMark Vantage. Now I'm having doubts as to which application is more demanding.
I think a good compromise would be high shader clocks for GPUGrid while only having moderate core and memory clocks to satisfy everything else. Otis11, you pointed out earlier in the thread that it's probably best, in terms of stability and clocking, to keep the shader and core linked, so hopefully 685/1515/1400 will work for all applications (moderate core OC, high shaders, stock memory). I'm testing that out right now.
Edit: it's also a pain in the arse every time 3DMark Vantage errors out, because then the GPU downclocks and I can't seem to bring it back up to even stock speeds unless I reboot the computer.
1400 on the mem?!?! Are you kidding? Is that with double data rate counted, or before? The highest I've gotten mine is 1188... I haven't pushed it, but 1400 seems really high.
And my clock is rock solid on any test I've done... pretty sure that included FurMark.
If you have time, would you mind finding out which part causes it to be unstable? Would 685/1477/1160 work on it?
Oops! Sorry, typo: memory was stock 1050, not 1400, lol.
Yeah, I'm trying to find the edge of stability for all applications right now. I just tested 685/1500/1050 and 3DMark Vantage still errored at 1440x900, so it's either the high shader clock or the moderate core clock, since memory was stock. 700/1500/1180 passed Vantage at the standard resolution, but at the higher res it fails. I have the Zotac AMP edition, so I kinda expected it to clock a little higher than reference, but I guess not. If I get enough data points, I can make a small chart. I will keep editing this post with my own results.
GTX 260 (216)
700/1500/1180: GPUGrid *stable so far* .... 3DMark Vantage: stable 1280x1024, unstable 1440x900
685/1500/1050: GPUGrid *stable so far* .... 3DMark Vantage: stable 1280x1024, unstable 1440x900
680/1485/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, stable 1440x900
680/1500/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, unstable 1440x900
680/1490/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, stable 1440x900
700/1490/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, stable 1440x900
715/1490/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, unstable 1440x900
715/1485/1050: GPUGrid *stable* ............. 3DMark Vantage: stable 1280x1024, stable 1440x900
The last line is my final stable clock, which is where it's crunching now.
So what happens if you keep the core at 680 and push the shaders up to 1500? Will that pass?
If not, we know where the error is. You might also try 1485 shader and 702 core... if it passes both of those, it should work at 702/1500; if not, you know which one is causing the error.
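The isolation logic described above (vary one clock at a time against a known-good value) can be sketched as follows. The clock numbers are from the thread; `is_stable()` is a hypothetical stand-in for an actual stress run (a GPUGrid WU or a Vantage pass), not a real API.

```python
# Sketch of one-variable-at-a-time isolation: given a known-good and a
# failing (core, shader) combo, test each axis independently to find
# which clock is the culprit. is_stable() is a hypothetical stand-in
# for a real stress test.
def isolate_culprit(good, bad, is_stable):
    """good/bad are (core, shader) tuples; returns failing axes."""
    core_only = (bad[0], good[1])    # push core, keep good shader
    shader_only = (good[0], bad[1])  # keep good core, push shader
    results = {
        "core": is_stable(core_only),
        "shader": is_stable(shader_only),
    }
    return [axis for axis, ok in results.items() if not ok]

# Example with a made-up stability rule (shader above 1490 fails):
fake_test = lambda c: c[1] <= 1490
print(isolate_culprit((680, 1485), (702, 1500), fake_test))  # -> ['shader']
```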
What I found was that both EVGA and XFX 260s will fold/grid at 1620 IF the ambient is below 22 deg, but because that temp is on the cusp of normal house temp, it's best to drop one step to 1584 (this with 100% fan, core at 590 (586 real), and mem at 999). Then again, I don't game... all settings/readings are from Precision.
OK, well, your shaders only run at 1476 (1468-1494) or 1512 (1495-1532)... you can set the slider to whatever you want, but it really only runs at those values, so bumping it by just a few notches won't make a difference.
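For reference, the strap values quoted through this thread (1476, 1512, 1584, 1620, 1656) all happen to line up with multiples of 36 MHz, so snapping a requested clock to the nearest 36 MHz multiple roughly predicts what the card will actually run at. Treat the 36 MHz step as an inference from the quoted numbers, not a datasheet fact, and note the boundary cases (e.g. 1494) may differ on real hardware.

```python
# The shader clock on these cards moves in discrete "straps" rather
# than tracking the slider exactly. The straps quoted in the thread
# (1476, 1512, 1584, 1620, 1656) are multiples of 36 MHz, so we snap
# to the nearest multiple -- an inference, not a datasheet value.
STRAP_STEP = 36  # MHz, inferred from the quoted strap values

def effective_shader_clock(requested_mhz):
    """Snap a requested shader clock to the nearest 36 MHz strap."""
    return round(requested_mhz / STRAP_STEP) * STRAP_STEP

print(effective_shader_clock(1490))  # -> 1476
print(effective_shader_clock(1500))  # -> 1512
```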
OC, how'd you get them so high? Did you bump the volts up something serious?
I have a GTX 295 Co-op running at 740 MHz core and whatever the corresponding linked shader is. The fan is on auto (around 50%), temps are around 63C, and it's on stock volts. What's the max you guys are getting with these 295s? Ironically, this actually worries me, because it's hard to believe I can run these high clocks stable without touching the voltage. I have not tested it higher, but I think it can OC even higher. What is the default voltage? I'm just afraid that as I increase the clocks, the card might be overvolting itself.
OK, so by my guess you're at 1584 on your shaders... which is kinda high on stock voltage, but not unheard of.
Mabyboi did manage to pull his up to 1600 on the shaders before getting majorly unstable... What tool are you using? The first page has some good ones if you need them.
Edit: I should really update this thread. If anyone would post their info, I'm going to clear the point values and start again. Any and all info is welcome!
Well, the big difference is that he watercooled his. This one is bone-stock cooling with no tweaks at all; the fan only runs at 50% on auto, and the only tool I use is EVGA Precision to OC the card. Usually I'm worried that my CPU or GPU can't OC for crap, but in this case I'm a little skeptical that it can OC so high. When I get the chance, I'll try maxing it out to see how high it can really go.
My EVGA 295 (vanilla) is rock solid stable at 1518.
If I bump the voltage to 1.138 with the EVGA voltage tuner, I am stable at 1656.
Some GPUGrid WUs are less stressful than others, but if you let it run a few days, you will probably hit each different type and can call it stable.
I always double-check the WUs that fail against the results from other machines to make sure it's not a bogus WU, but those have been quite rare lately.
Sorry if this is OT, but I've noticed that the max GPU usage I see through BOINC is ~73%; it hovers consistently between 67-73%. Is there a way to increase this to increase productivity? I am reporting GPU usage numbers from Precision.
EDIT: scratch the above, I just noticed that GPU 2 is sitting at ~50%. I'm guessing it varies from WU to WU.
Last edited by Zloyd; 07-05-2010 at 08:56 AM.
Giving 1500 shader a go.
Shaders on the 295 move in "straps".
Setting 1500 will really move it to 1512 (I typoed in my last post).
If you get errors there, move down to 1476 and see if that is better for your card.
Good luck; let us know how it works out.
I still have the core and shaders linked, and the voltage is already set at 1.138. I know I'm going to use that, since I do want to see how much I can push out of this; temps are going to be the limiting factor ultimately.
I think with lower volts and the core unlinked (1500 shader, ~600 core) I was getting errors, so I thought WTH, bumped the volts, and kept them linked for now.
Hopefully I have the patience to run like this for a couple of days before trying the next step.
The core clock doesn't help in GPU crunching, so you can unlink it and save the power and heat. Did I mention that using an extra fan on the backside of the card helps tons?
So you mean a fan on the outside to pull air out? Makes perfect sense; it puts the GPU heatsink in a push-pull config.
I have at other times benched with various fans strapped to it :P I am also waiting on a shipment of two case fans which will pull air out from above the GPU (side-panel fans; the one that will do the work is an 80mm Silverstone that can do 80 CFM if I need it to, the other is a thin 120mm Scythe, and nothing bigger will fit atm).
I just checked how fast your 295 completes tasks, and if I am correct it shows half the card at a time; it's outscoring the 285 (against half a card) that you have. I guess XP has a huge impact on performance; they should really optimise it more for Win7. Oddly, the averages don't reflect the superiority of the 295 system over the 285, so I must be missing something obvious.
I can't go back to XP now; I use this rig all the time (I might try a dual boot, so before I sleep or go out to my classes I can make it crunch as efficiently as possible). I'm sure they are working on making it as efficient on 7.
I will take your advice and drop the core. I just feel that with my card, if I drop the core it will error out; I have not tried dropping the core since increasing the volts.
Silly me, it seems I HAVE dropped the core a tad bit :P It is at 670c/1512s/1100m (Precision settings); it was 700c earlier.
The next shader strap worth trying for is 1600; I'm not sure if it can do that with the current volts.
Could you suggest how far I can drop the core for the 1512 and 1600 shader settings? I was thinking 600 core for 1500, and 670 or 700 for 1600, though pulling off 1600 shaders on this card seems unlikely at this point in time.
Edit: gonna try out 700c/1620s for a few hours and see how it goes.
Last edited by Zloyd; 07-08-2010 at 02:38 AM.
Actually, I have my fan blowing against the side at a ~60 degree angle from the PCIe plugs, pointed towards the card's exhaust vents (typically the back of the case, but I'm not running in a case, which helps keep temps down).
I move my cards around too much for any of the stats to make much sense.
The current difference you are seeing between the latest runtimes on my 295 and 285 is that the 285 is downclocked during the day, because it's upstairs in my study (I'm not spending extra money on more AC than I really need), while the 295 is in the basement, which is cool enough that no AC is used there. I just moved the 295, so I'm running it at 1476, which is one strap below my stock-volts max (1512).
I'm thinking I might move the 285 onto that same board, break down the 480, and re-TIM it all with IDC to see if that helps. Anyone know if leaving the shroud off entirely helps?
The optimization actually needs to happen at the driver level, between NVIDIA and MS, because of the driver-model changes that MS made at the OS level; GPUGrid can't do anything about it.
Because these cards are used for crunching and daily driving (I'm not a gamer), I use Precision to underclock the core as much as it will let me (just pull the slider as low as it will go), and I have never had a stability problem because of it.
Upping the core frequency will only create more heat, which in turn brings the max OC of your shaders down... don't do it.
Last edited by Snow Crash; 07-08-2010 at 08:21 AM.
I seem to crash if I don't have the core and shader somewhat in each other's vicinity; this is probably because of the straps mentioned on the first page of this thread.
Damn, you have some insane cards for someone who doesn't game, though I should get used to that, this being GPUGrid :P
I too have played with fans blowing on the side of the card, and it helps all right. But the fan I have now for that is too darn noisy; I sleep in the same room, and when I used to run that fan my ears were buzzing, so I had to stop. I'm hoping to get my fans in a couple of weeks. I don't think they will help a LOT, because my ambient temp is always close to 30 and the card runs in the high 80s and low 90s depending on the day. 295 temps were pretty exciting this summer :P