A 275 at 1656 is real nice ... you'll get some good points for sure
1656 - on stock volts?!? Dude that's pretty sweet!
And do we have a ballpark for overclocking the 4xx series?
Sorry I had not posted ...
480s get interesting as each card has its default voltage hand-set at the factory, so we all have a different base to start from.
My base is 1038 (ouch, I've seen people bragging about much lower but you get what you get).
Anyway ... at 1138 mV I can go up to 1736.
My sweet spot is 1050 mV for 1595, which I think is a good balance.
Is it possible to get help with a good and stable OC for two GTX 465s? Yes, they are watercooled; they are both MSI N465GTX-M2D1G. So far they have been crunching since I got them xD I feel like I need more crunching powah!!! I barely get to play on them because of college (9 hours).
Thank you
Galaxy GTX 480
803/1606/1900 - Plan.B - last 7 day average is 50,152 ppd
Volts: Stock
Cooling: watercooling, gpu only
Temps: 107 - 109 F loaded
This is a 7 day 24/7 stable loaded oc. I've just started playing with this card, but it overclocks nicely so far. Watercooling the gpu, with the rest of the card naked, dropped the gpu and pcb temps by over 20C.
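Since the thread mixes Fahrenheit and Celsius (107-109 F loaded here, "keep below 80C" elsewhere), a quick conversion sketch for readers thinking in Celsius:

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The loaded temps quoted above, in Celsius:
print(f_to_c(107), f_to_c(109))  # ≈ 41.7 and 42.8 C
```

So those watercooled loaded temps are roughly 42-43 C, comfortably under the 80 C the thread treats as the worry line.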
Distributed Computing: Making the world a better place, one work-unit at a time.
http://www.worldcommunitygrid.org/index.jsp
Swan_sync on?
LC Nab: A pair of 465s ... I bet you will be able to get some nice points with those. What tools and experience do you have overclocking?
Plan.B - Nice ... we like stable better than fast. I'm with lkiller123 in thinking you are not using SWAN_SYNC=0. We realize this uses a full CPU thread in conjunction with your GPU and some people don't want to do that. Either way ... nice card!
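For reference, SWAN_SYNC is an environment variable the GPUGrid app reads, so it has to be present in the environment of the BOINC client before the client launches the app. A minimal Python launcher sketch (the `boinc` command line is illustrative; on most systems BOINC runs as a service, in which case the variable goes into the service's environment instead):

```python
import os
import subprocess

# SWAN_SYNC=0 makes the GPUGrid app busy-wait on the GPU: it costs a
# full CPU thread but completes work units noticeably faster.
env = dict(os.environ, SWAN_SYNC="0")

# Illustrative launch only; how you actually start the client varies:
# subprocess.Popen(["boinc"], env=env)
print(env["SWAN_SYNC"])
```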
roki977: Good score ... let it run for a week straight out and your average PPD will climb higher.
That is all you need. Unlock voltage control (if your GPU uses Volterra regulation you'll be able to use the voltage regulator) and try OCing. What card do you use?
Snow Crash, I would love to let my 470 run for a week straight but I own only one PC for everything.
Update:
My 275 errored on a KASHIF_HIVPR WU at some point Monday. I backed my clocks down to 485/1652/1152 Monday evening, just a 4MHz drop on the shaders, and it has successfully completed another KASHIF_HIVPR WU and been going strong since. I'm still in the 1656 strap so it's all good.
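A hedged sketch of the "strap" behavior described above: the driver snaps a requested shader clock to the nearest value the PLL can actually program, which is why a 4 MHz drop from 1656 still lands in the 1656 strap. The strap list and nearest-value rule here are illustrative assumptions, not an exact G200b table:

```python
# Illustrative subset of shader-clock straps (MHz), not an exact table.
STRAPS = [1512, 1548, 1584, 1620, 1656, 1692]

def effective_shader_clock(requested_mhz):
    """Snap a requested shader clock to the nearest programmable strap."""
    return min(STRAPS, key=lambda s: abs(s - requested_mhz))

print(effective_shader_clock(1652))  # 1652 still lands in the 1656 strap
```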
Nymph - Xeon 5675 @ 4.1 w/ x2 680
Ikaros - i7 920 w/ 275 - Build In Progress
Long live LGA1366
I think you can unlock shaders on every 465 regardless of memory chips, and only shaders are important for GPUGrid.
Under water you can use 1.087v with no problems and you should get at least 1500MHz or more if you are lucky.
Fermi loves cold. My 470 under LN2 with 1.15v goes over 1900MHz easily (1730 on air), with stock mem; memory freq even more. If you are overclocking for GPUGrid, leave mem at stock.
Galaxy GTX 480
809/1618/1954 - Plan.B
Volts: Stock
Cooling: watercooling, gpu only
Temps: 107 - 109 F loaded
Swan_sync=0
Win Vista 64
Daily average over the last week is down to 42.5k
???
GPU usage as per EVGA Precision is at 65-66%
WCG is running, 7 wcg threads and 1 gpug
Last edited by Plan.B; 10-03-2010 at 03:18 PM.
disabled hyper threading to increase ppd a bit
Main: i7-930 @ 2.8GHz HT on; 1x GIGABYTE GTX 660 Ti OC 100% GPUGrid
2nd: i7-920 @ 2.66GHz HT off; 1x EVGA GTX 650 Ti SSC 100% GPUGrid
3rd: i7-3770k @ 3.6GHz HT on, 3 threads GPUGrid CPU; 2x GIGABYTE GTX 660 Ti OC 100% GPUGrid
Part-time: FX-4100 @ 3.6GHz, 2 threads GPUGrid CPU; 1x EVGA GTX 650 100% GPUGrid
Even without disabling HT, if you tell BOINC to use one less thread than you really have, GPUGrid will use it and you will get much better scores.
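In BOINC this maps to the "On multiprocessors, use at most X% of the processors" preference. As a config fragment it would look roughly like this (assuming the standard `global_prefs_override.xml` override file in the BOINC data directory, and an 8-thread HT i7, where 87.5% = 7 of 8 threads):

```xml
<!-- global_prefs_override.xml fragment: reserve one CPU thread
     for feeding the GPU while 7 threads keep crunching WCG. -->
<global_preferences>
   <max_ncpus_pct>87.5</max_ncpus_pct>
</global_preferences>
```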
btw, cpu is i7 960 (ES) at 4.0 GHz. RAC is climbing, up to 48k now. I probably haven't let it run long enough on the new settings to get a good daily average.
Hello everyone,
A couple of months ago I picked up a second EVGA GTX275 SC (648/1458/1188 clocks), primarily because I wanted to ditch SoftTH in favor of nVidia Surround for my triple monitor setup. We finally got some cooler weather here, so the other day I fired up GPUGRID to see what the cards would produce. I ran them both between 8pm and 2pm for a couple of days (it got too hot in the man-cave during the afternoons), then we had more cool weather on 10/2 and I've been running them 24hrs ever since. Looks like they're each averaging ~33k/day (while running 8 WCG threads concurrently), which is pretty much in line with my expectations.
But I'd like to see if I can get a little more out of them. Snow Crash suggested bumping the shader clocks a little, and posting in this thread if I need any assistance. Now in the past, I've briefly experimented with a few MILD overclocks on my CPU, but I've never tried anything on a GPU. In other words, I'm a complete n00b when it comes to this sort of thing.
I've done a little reading up on GPU overclocking; here's my current understanding of the process at a high-level:
Run GPU stress/stability tool (I have FurMark)
Run OC software (I have Rivatuner)
Unlink core/shader/memory clocks and increase MHz values in small increments
If artifacts/system lockups occur, back down to last stable clock value/restart system if necessary
Watch temps (keep below 80C?)
Is this roughly accurate?
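The trial-and-error loop in those steps can be sketched like this. `stability_test` is a hypothetical placeholder for "run FurMark/GPUGrid for a while and watch for artifacts or errors", not a real API:

```python
def find_stable_clock(base_mhz, step_mhz, max_mhz, stability_test):
    """Step the clock up until a test fails, then keep the last stable value."""
    stable = base_mhz
    mhz = base_mhz + step_mhz
    while mhz <= max_mhz:
        if stability_test(mhz):
            stable = mhz          # this step passed; try the next one
            mhz += step_mhz
        else:
            break                 # artifacts/lockup: back down to last stable
    return stable

# Hypothetical example: a card that artifacts above 1566 MHz.
print(find_stable_clock(1458, 54, 1700, lambda mhz: mhz <= 1566))  # 1566
```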
Most of the OC guides I've come across seem to be oriented toward increasing gaming performance - they talk about stepping the core clocks first, then shaders, then memory. But in reading through this thread, I guess it's only the shaders that really matter for GPU computing. So before I try my hand at this, I have a couple of basic questions:
1. Is Rivatuner ok, or should I use EVGA precision? (I'm on 258.96 driver)
2. I'm assuming I should step the shaders according to the "Scale for G200b" table in the first post...correct?
3. I guess I leave the core clock alone, but do I need to bump the memory clock each time I step up the shader clock - i.e. do these need to be kept in sync?
4. After increasing shader MHz, how long should FurMark run before considering the frequency stable and moving to the next step?
5. Since I'm working with dual cards, should I OC one then the other, or both in tandem?
6. What kind of temps should I be concerned about for these cards? It's currently ~72F ambient and the cards are running 67C with GPU utilization at 65-67% running GPUGRID (which seems OK). I ran FurMark when I bought the second card and GPU utilization was 94% or more (I don't recall what the temps were)...what temps should I expect when running FurMark?
I'm very much a crawl/walk before run kind of guy, so I'm looking for a "mild" to maybe "medium" result (at least to begin with). Also, please let me know of any pitfalls I may encounter. Any guidance would be greatly appreciated!
EDIT: System specs -
ASUS P6T6 WS Revo
Intel i7 920 (stock clocks)
6GB OCZ3X1600LV6GK @1600MHz
EVGA GTX275 SC x2
Corsair H50
Corsair TX750
Regards,
1. I prefer EVGA precision - It has just worked better for me in the past. (But to be fair I had more experience when I used it)
2. Yes... And you can probably just jump the shaders to 1512. (1476 if you want to walk first )
3. I like to bump the memory also and keep the core close to what it should be if it were locked, but that's just me. Many here leave the mem at stock and drop the core below stock.
4. Furmark - yeah I mean if you really want to. I have yet to see a card that can't do 1476, so I would recommend jumping there and testing, then going to 1512. Also, GPUGrid itself is a much better test of shaders than Furmark. I would just try it (disable network so you don't burn through your Queue though)
5. When I had two, I did both in tandem until I hit a problem, dropped both one step and bumped up each. (Turned out neither could get above 1512 without extra volts though... and one card couldn't overvolt)
6. I don't worry about Furmark temps...If it's below 70C on GPUGrid I'm happy. I even run up to 80s in the summer - but not for long stints. Only gets that hot during mid day. But then again I'm not very cautious about temps. I say it's going to be old in 3-4 years so I'll get rid of it before I'll ever kill it.
I believe that's everything, but I'll go through this more thoroughly later. Feel free to point out anything I overlooked.
(And get that 920 OCed! Vcor 1.275, VQPI 1.300 mult 21 bus 180+ easy - maybe even 200 for 4.2 Ghz. With your cooling you're golden )
WTF ... I wrote a big long post yesterday and was certain I put it up???
Odd how it looked and read so much like Otis so no real loss
I'm not married to FurMark by any means. I just got it because the most recent 275 came from EVGA's "B-Stock", and I wanted to stress the card to make sure everything was kosher with it. If I can do without it, GREAT.
So as far as testing goes, do I just start GPUGRID, step up shader clocks 1 notch and watch for errors in BOINC Manager? How long should it run before considering the OC stable? Do I need to complete the WUs and check for errors at the GPUGRID site before moving to the next step?
Sorry for all the questions, I'm sort of a nervous ninny about this. Maybe I'm trying to make it too complicated?
Hehe, I should've known better than to admit to having an i7 at stock around here. I was going to delve into CPU OCing using the dedicated cruncher I picked up last May (AMD 1055t), but the wife loaned it out to one of her (recently divorced) friends because "...her kids need their own computer..." I told her that Santa had better bring those kids their own for Christmas...
Anyway, thanks for your help!
Regards,