
Thread: tips, tricks, and tidbits for GPU crunching

  1. #1
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510

    tips, tricks, and tidbits for GPU crunching

    hey guys, we were all n00bs at one point or another, and i believe there are a lot of little things here and there that new crunchers as well as long-time crunchers either overlook or don't know about yet. here's a few things to keep in mind:

    • GPUgrid can only crunch with nvidia cards; ATI cards are being tested right now
    • windows XP will complete a WU faster than win7 or vista, hence higher PPD. the slower times on vista/win7 come down to MS and nvidia's drivers, so there's nothing we can really do
    • finishing a work unit (WU) in less than 2 days gives a 25% point bonus, and less than 1 day gives a 50% bonus
    • core and memory speed have a negligible effect on computation times, temps, and power. the number of shaders (stream processors) and their clocks have a much bigger impact on computation time
    • CPU speed will also affect your completion time
    • generally, it's better to spread your GPUs between different computers than to fit as many as you can in one computer, because they fight for resources (will provide link to my own system for reference)
    • when using multiple GPUs in one computer, you should disable SLI
    • 2gb of system memory is about the max you'll need; 4gb is overkill (i ran 6 GPUs and 7 WCG work units with 2gb, and it was fine)
    • pci-e bandwidth shouldn't matter (although can someone help me verify this?)
    • a cooler-running card can save up to 60 watts of power at load (will link to my own testing). find a good balance between cooling and fan noise
    • setting SWAN_SYNC in the environment variables gives fermi cards a good 10-20% performance boost by dedicating a full CPU thread to the GPU. on any other cards, it's detrimental to performance
    • GPUs at crunching load generally draw about 60% of the rated TDP
    • the 200 series cards are the most optimized right now. a gt240 is best in terms of PPD/watt and PPD/cost. a gtx295 is currently the highest performing card, with a gtx480 just a hair behind. the gtx260 and gtx275 are a good middle ground for PPD, cost, and power consumption
    • gts250s are horrible crunchers for the shader count and clocks they run at. not sure what it is with these cards, but i have not seen a single one with good results
    • you can mix cards from the 8/9, 200, and 400 series in the same computer, as long as you use the latest nvidia drivers
    • for dedicated crunchers, turn off your screen saver and disable the "turn off monitor after..." setting, because leaving these on might cause certain GPUs to downclock to 2D mode. i'm not exactly sure why. just manually turn off your monitor
    • for headless crunchers, don't use the MS remote desktop that comes with windows, because it will instantly error out your WU. use UltraVNC or RealVNC instead for remote access
    • example of a very high PPD setup: gtx480 overclocked, CPU overclocked, win XP, SWAN_SYNC=0, hyperthreading disabled, no WCG
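the deadline bonus above works out like this (a quick sketch in python; the function name and the 5,000-point example are mine, but the 25%/50% thresholds are the ones from the list):

```python
def gpugrid_credit(base_points, return_time_hours):
    """apply the GPUgrid early-return bonus: +50% if the WU comes
    back in under 1 day, +25% under 2 days, otherwise no bonus."""
    if return_time_hours < 24:
        return base_points * 1.50
    if return_time_hours < 48:
        return base_points * 1.25
    return base_points

# a hypothetical 5,000-point WU returned in 20 hours earns 7,500 points
print(gpugrid_credit(5000, 20))
```

so getting a WU under the 1-day mark is worth a lot more PPD than a few extra mhz.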


    please post in here to help me add on to this list, or if you have questions about any of those i listed.
    Last edited by WhiteFireDragon; 07-30-2010 at 11:56 AM.

  2. #2
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    been lagging with the update, but here they are.

    1) SWAN_SYNC:
    right click my computer -> properties -> advanced system properties -> advanced tab -> click environment variables -> under system variables, click new, then enter:

    Name = SWAN_SYNC
    Value = 0
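if you'd rather skip the GUI clicking, the same variable can be set from a command prompt (a sketch; run it elevated, and restart BOINC afterwards so the science app picks it up):

```bat
:: set SWAN_SYNC=0 machine-wide, equivalent to the GUI steps above
setx SWAN_SYNC 0 /M
```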
    2) below, i also attached the cc_config file. by default, it will make BOINC report your completed WUs right away. it's also useful if you want to add more options, a little easier than making your own cc_config. unhide all your hidden folders, then paste it into the BOINC data folder. for vista/win7, the directory is C:\ProgramData\BOINC, and for XP, it's C:\Documents and Settings\All Users\Application Data\BOINC. then in the BOINC manager, go to the "Advanced" menu and select "Read config file". check under the "Messages" tab to make sure the manager read it correctly.

    3) i also attached a PPD calculator spreadsheet. sometimes it's not ideal to wait a few weeks for your machine's PPD to stabilize, so this spreadsheet will show an accurate projection of predicted PPD, as long as all variables are kept the same. the instructions are written inside, but i think it's pretty self-explanatory.
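the spreadsheet itself is in the attachment, but the basic projection it does can be sketched in a few lines (a sketch; the names are mine, and the idea is just to extrapolate one WU's rate to a full day instead of waiting weeks for the average to settle):

```python
def projected_ppd(points_per_wu, seconds_per_wu, num_gpus=1):
    """project points per day from a single completed WU,
    assuming clocks, OS, and settings stay the same."""
    wus_per_day = 86400.0 / seconds_per_wu
    return points_per_wu * wus_per_day * num_gpus

# e.g. 5,000-point WUs finishing in 4 hours on one card
print(projected_ppd(5000, 4 * 3600))  # 30000.0
```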
    Attached Files
    Last edited by WhiteFireDragon; 08-08-2010 at 07:18 PM.

  3. #3
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Los Angeles/Hong Kong
    Posts
    3,058
    Different GPUs in the same computer will still work, right?
    Team XS: xs4s.org



  4. #4
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    Quote Originally Posted by lkiller123 View Post
    Different GPUs in the same computer will still work, right?
    yup, you can use different GPUs, even across different series on the same computer, as long as you use the 257.96 WHQL or later drivers. they released drivers that let fermi cards be used alongside 200 and 8/9 series cards about a month ago i think.

    i'll also add this to the first post

  5. #5
    Xtreme Cruncher
    Join Date
    Oct 2007
    Posts
    1,638
    Quote Originally Posted by WhiteFireDragon View Post
    • generally, it's better to spread your GPU's between different computers than to try and fit as many as you can on one computer because they fight for resources (will provide link to my own system for reference)

    Question about this; how much do multiple GPU's affect performance? And what resources do they share? If all the GPU's have enough free CPU threads and available memory then what causes the sharing conflicts?

    I see SC's 3 x 285 machine averaging ~115,000ppd where it looks like a single 285 would do ~40,000. I also see some of the multiple 295 scaling much worse. The top host CNT-IQE's 4 295 Linux box only averages around ~210,000ppd; the next best comparable host I can see is the 2nd best linux 2 x 275 machine averaging ~70,000ppd. If those 295's performed that well that 4 x 295 box ought to do about 280,000ppd.

    Anyone with any experience with multiple GPU's in multiple host vs single host loaded up with GPU's?
    XTREMESupercomputer: Phase 2
    Live up to your name - November 1 - 8
    Crunch with us, the XS WCG team

  6. #6
    Xtreme Enthusiast
    Join Date
    Jan 2003
    Location
    UT
    Posts
    590
    Quote Originally Posted by WhiteFireDragon View Post
    • a cooler running card can save up to 60watts of power at load (will link to my own testing).
    Have you tested this already or are you testing this right now?
    Intel Core I7 3930K 4.4Ghz | Asus Radeon HD7870 | 8 GB Ram | Win 7 Ultimate x64 | Lenovo L220x Monitor | Logitech Z-5500 5.1 Speakers

  7. #7
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by trn View Post
    Question about this; how much do multiple GPU's affect performance? And what resources do they share? If all the GPU's have enough free CPU threads and available memory then what causes the sharing conflicts?

    I see SC's 3 x 285 machine averaging ~115,000ppd
    That machine is really 1 GTX 285 and 1 GTX 295 (old-style 2-card). GPUGrid reports the total number of GPU processors based on the card in the first slot. Since its current setup on 7/10 it has averaged 122,592.
    You can see what is really on a machine by looking at any task that has been returned. You can also see which of the cards processed the WU...

    http://www.gpugrid.net/result.php?resultid=2747488
    # Using device 0
    # There are 3 devices supporting CUDA
    # Device 0: "GeForce GTX 285"
    # Clock rate: 1.58 GHz
    # Total amount of global memory: 1073414144 bytes
    # Number of multiprocessors: 30
    # Number of cores: 240
    # Device 1: "GeForce GTX 295"
    # Clock rate: 1.48 GHz
    # Total amount of global memory: 939327488 bytes
    # Number of multiprocessors: 30
    # Number of cores: 240
    # Device 2: "GeForce GTX 295"
    # Clock rate: 1.48 GHz
    # Total amount of global memory: 939327488 bytes

    The biggest issue for me is not overhead (which I don't think is significant) but trying to combat the heat and keep the OC as high as a single-card setup ... I can't quite do it yet.
    On the same machine running a single card, the max shader clock is 1656. The 285 can do this on stock volts (no hard mod done) and the 295 can do it with EVGA Voltage Tuner pushing the voltage up to 1138 mv. With both cards plugged in, Voltage Tuner won't run because it knows it can't do anything with the 285, and my current PSU probably couldn't handle it anyway.

    where it looks like a single 285 would do ~40,000.
    Can you drop a link to the machine you are getting this from?
    If we can find single card machines running WinXP 32 bit, with 285 and another with a 295 at the same shaders we could make a fairly accurate comparison between single card and multi-card setup runtimes.

    I also see some of the multiple 295 scaling much worse. The top host CNT-IQE's 4 295 Linux box only averages around ~210,000ppd; the next best comparable host I can see is the 2nd best linux 2 x 275 machine averaging ~70,000ppd. If those 295's performed that well that 4 x 295 box ought to do about 280,000ppd.

    Anyone with any experience with multiple GPU's in multiple host vs single host loaded up with GPU's?

    When comparing scores make sure you look at what is really in the machine and also the OC (or not). Don't forget to take the OS into consideration. Comparing PPD is not accurate at all: there are errors, perhaps the machine was turned off, maybe it has not been running in the current config long enough to attain its max PPD... I'm sure we could think of more.
    Last edited by Snow Crash; 07-30-2010 at 08:34 AM.

  8. #8
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    Quote Originally Posted by hedge View Post
    Have you tested this already or are you testing this right now?
    i already tested it over a month ago when i had my gtx480. there are too many numbers and data, so i was too lazy to post it up lol. i'll post it soon in a separate thread in the nvidia section, and i'll link to it. 60w is the max on a very high power-consuming card, so lower cards will have smaller power deltas

    Quote Originally Posted by trn View Post
    Question about this; how much do multiple GPU's affect performance? And what resources do they share? If all the GPU's have enough free CPU threads and available memory then what causes the sharing conflicts?

    Anyone with any experience with multiple GPU's in multiple host vs single host loaded up with GPU's?
    it looks like a pretty significant difference in performance for the higher performing cards, and a lot of factors are involved. my single gtx295 was able to do roughly 85k PPD with the following settings: 1600mhz shader, winXP, 3.6ghz 8-thread CPU with WCG loaded. the same card under almost the same settings alongside 2 other gtx295s (6 GPUs total) does much less (although i don't have exact figures yet because my rig is still stabilizing in points). here's a few reasons why, i'm guessing:

    1) i can't get it stable at the same clocks, and it's the same card. before, i could run it stable at 1584mhz shader; now that card is not completely stable even at 1554mhz. i eliminated temps as a factor because even when i leave space for it to cool and turn the fan up to 100% (55C load), it's still not fully stable at a lower clock than it ran by itself at a higher clock. no idea why, but i just know the WU is a little slower, and WCG isn't even running.

    2) the pci-e bandwidth could possibly be a factor, although this is pretty hard to test. i would imagine 6 powerful GPUs crammed into x16/x8/x8 might play a role in saturating the lanes.

    3) my system is CPU bottlenecked. for each GPU to be most efficient, it needs its own CPU core. once you turn on hyperthreading, each CPU core is shared between 2 threads, so a little efficiency is already lost there. i'm not running WCG at the moment while testing this, but to make it worse, it'll be another huge hit to CPU resources once i let WCG run too. worst case, a single CPU core with HT on will run 2 WCG units and 2 GPUgrid units, although more realistically the load will distribute more evenly between the other cores. the CPU switching back and forth between gpugrid and WCG is another hit. 4 cores doing 14 WUs at the same time... pretty taxing on the CPU.

    in fact, point 3 is so heavy on the CPU that a full WCG unit was automatically suspended while i was running the 6 GPUs. here's a few screenshots: you can see in the first one that one WU's status is "waiting to run" while all the GPUs are running, and the second screenshot shows that once i suspend all the GPUs, all 8 of the WCG units will run. you can do a simple test yourself: look at the GPU load while WCG is running, then pause WCG and you'll see the GPU load go up about 10%


  9. #9
    Xtreme Cruncher
    Join Date
    Oct 2007
    Posts
    1,638
    Interesting Info, I knew SC and WFD would have some good data

    I got my information from the GPUGrid list of top hosts. The host averages may report a bit low (115.5k vs the 122.5k for SC's top host) but it is a good comparison to other hosts.

    The top single 285 host is http://www.gpugrid.net/show_host_det...p?hostid=25361 with a RAC of 44,888.
    2nd best is http://www.gpugrid.net/show_host_det...p?hostid=30790 with a RAC of 41,452
    And then there is another one in the 41's and another in the 39's. I guesstimated 40,000 for a single 285 because the top host is probably overclocking that single card a lot, and the next 2 hosts probably have some overclocking. All these hosts are on XP.

    It is really too hard to make anything more than guesses by looking at other people's stats; there are way too many factors to judge. But what the hell... i'll give it a shot anyways.

    I'm guessing that for higher-powered GPUs, CPU sharing begins to hurt performance the most when there are more GPUs and CPU projects than physical cores. As both SC and WFD mentioned, multiple cards will never overclock as well as single cards no matter what kind of cooling is available. This is probably akin to having 6 DIMMs vs 3 DIMMs when CPU overclocking; good luck achieving the same max CPU overclock with all the ram slots populated. Unfortunately there aren't enough hosts with over 4 GPUs to find out more (CNT-IQE's linux host with 4 295's and WFD's uber cruncher listed at 2.5 295's). I would guess that a host with dedicated CPU resources would scale fairly well vs a host with only a single free core per GPU, so I would assume 4 x 480's with no CPU projects should scale within 1% - 3% of a single card with a dedicated thread. 8 GPUs with a 4c/8t CPU should be hurt by an extra 5% - 10% due to sharing 1 core with a HT. I don't think PCIE bandwidth could affect performance by more than 1%; even in games, slower PCIE bandwidths have minimal performance hits. I dunno, but that's how I see the GPU scaling working out; anyone care to try it? 7 x 480's in a machine? (and your own dedicated nuclear power plant )
    XTREMESupercomputer: Phase 2
    Live up to your name - November 1 - 8
    Crunch with us, the XS WCG team

  10. #10
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Nice summary
    To support it I looked at the details of those two 285 hosts (thanks for the links) and compared the same types of WUs between them and my 285 ... roughly we are all within ~100-150 MHz of each other. When you look at the runtimes the difference is negligible (like 2-5 minutes max), so I'm not seeing an efficiency penalty for running multiple cards. I am not running as high as I know that card can go, but I am close enough that I don't think it would make more than a couple of minutes' difference at best.

    WinXP or Linux; for fermi use SWAN_SYNC (not needed on Linux) and leave 1 extra thread/core free. For fermi, the extra thread I am referring to is not just the 1 used by SWAN_SYNC but 1 more on top of that. More free threads beyond that only get you maybe 1% higher GPU utilization. The extra thread/core allows the OS and background processes to run without interrupting crunching, as crunching always runs at a lower priority.

    This brings us to the most important efficiency factor ... stability
    All it takes is 1 hour lost to failed WUs (equals 4.17% of your daily runtime) and you can forget about the 1-2% differentiators.
    Last edited by Snow Crash; 07-31-2010 at 02:48 AM.

  11. #11
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Sorry for the double post but I'm trying to get back on track for this thread

    If you have an older card that is just about at the limit of the time-bonus range, you can force BOINC to report your results immediately (reporting is different than uploading) by adding the following section to your cc_config.xml.

    <cc_config>
      <options>
        <report_results_immediately>1</report_results_immediately>
      </options>
    </cc_config>

    Most projects don't like you to report your results immediately; they would rather you bunch a couple up from the same machine, as the reporting process hits the database pretty heavily.

    If you don't already have this file you can make it in notepad (make sure it saves as xml and not txt) and put it in your data directory.
    If you don't know where your data directory is, you can see it in the first couple of message lines at BOINC Manager startup.


    How to set SWAN_SYNC:
    XP: Right click My Computer, Properties
    Click Advanced, and finally Environment Variables.

    Vista or Win7: Right click Computer, Properties
    Click Advanced System Settings (left side), then Environment Variables.

    Under System Variables, click New
    Name = SWAN_SYNC
    Value = 0
    Last edited by Snow Crash; 07-31-2010 at 06:28 AM.

  12. #12
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by WhiteFireDragon View Post
    • a cooler running card can save up to 60watts of power at load (will link to my own testing)
    • GPU's at crunching load generally will draw about 60% of the rated TDP
    Hi WFTD ... any update on these? Which cards, volts, freq did you test with?
    I need to pull the trigger on a CorsairAX and I had always assumed I would get the 1200 but ... maybe the 850 is a better fit.

  13. #13
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    sorry, i've been extremely lazy about posting data and results. 1200w is pretty overkill for crunching IMO. maybe it's good for benching where they pump a gazillion volts through the CPU or GPU, but WCG and gpugrid don't max out the TDPs. just to give you an estimate: i ran an i7 @ 3.78ghz 1.18v and 3 OC'ed gtx295s at stock volts on that one uber build, and it pulled only 830w with WCG and gpugrid loaded. that was with a BFG LS-1200 PSU, which didn't even meet 80plus efficiency certification. i doubt you'll be trying to power anything higher than 3x gtx295/480 at the same time, but then again, you sometimes overvolt your GPUs, which i never do. also keep in mind that the 830w reading is AC power at the wall, while PSUs are rated for DC output, so with an 80plus gold rating from the corsair AX series, you should be below 800w AC with my hardware configuration. after converting to what the system needs in DC, it'll be well below 800w, so i think an 850w should be fine. is this for your gtx295 and 285 in the same rig?

  14. #14
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    ok i'm just going to double post. i think an AX 850w will be fine, but i'll post a few numbers and let you decide on it.

    i still haven't made the thread on the 60w power saving yet, so for now you'll just have to take my word on it haha. this was done on a gtx480 when i had it for almost 2 weeks. i noticed that the machine would consume less power at night than in the day. i also took note of the power consumption right as it started crunching, and about 15 min into it. power consumption steadily rises, and it made me wonder why.

    when i did my test, i think i used furmark and forced the fan to run at only 30% until it heated up to 100C, taking power consumption readings every few seconds. to let it cool, i turned off furmark and put the fan at 100%, again taking readings every few seconds. i repeated this loop several times, and it was pretty consistent within a few watts. after subtracting the power consumption of the GPU fan at 30% vs 100%, the gtx480 at its coolest point drew about 60w less than at 100C.

    here's a few more power consumption figures. my TDP references came from this wiki page. here's some numbers for a gt240. load is either from WCG and/or GPUgrid

    with dedicated GPU present
    140w GPU + CPU load
    107w GPU load only
    112w CPU load only
    75w no load

    clarkdale IGP, no dedicated GPU
    102w CPU load
    65w no load

    here's my analysis. based on no load from both, the GPU adds 10w (75w - 65w) just sitting there idle. you can also see that the CPU takes 37w (102w - 65w, and 112w - 75w) when it's loaded, and these numbers look accurate because the figures with and without the GPU agree. based on the difference between GPU+CPU load and CPU-only load, the gt240 takes about 38w (140w - 102w). this is probably accurate within a few watts, given the resolution of the kill-a-watt and the way BOINC loads both projects. with a higher-efficiency PSU you may shave off a few watts, but if you're overclocking, it may also add a few watts. i measured a few more GPUs with this method in the exact same system. here's the calculations:

    gt240 TDP: 69w
    system consumption: 140w
    GPU consumption: 38w = 55% of TDP

    gtx295 TDP: 289w
    system consumption: 272w
    GPU consumption: 170w = 58.8% of TDP

    gtx275 TDP: 219w
    system consumption: 225w
    GPU consumption: 123w = 56.1% of TDP

    edit- poppageek's gtx460 shows similar power consumption results:
    gtx460 TDP: 160w
    system consumption: 294w
    GPU consumption: (294w system - 219w CPU load + 14w GPU idle) 89w = 55.6% of TDP
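the subtraction chain above can be written out as a tiny script (a sketch; the function name is mine, and the numbers are the kill-a-watt readings quoted in this post):

```python
def gpu_crunch_draw(watts_cpu_gpu_load, watts_cpu_load_only):
    """GPU crunching draw = wall power with CPU+GPU loaded minus
    wall power with only the CPU loaded, measured on the same rig."""
    return watts_cpu_gpu_load - watts_cpu_load_only

# gt240 from the figures above: 140w - 102w = 38w, ~55% of its 69w TDP
draw = gpu_crunch_draw(140, 102)
print(draw, round(100.0 * draw / 69, 1))
```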
    Last edited by WhiteFireDragon; 08-05-2010 at 06:10 PM.

  15. #15
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Thanks, that seals the deal. Given that I pay the bills and still have to work for a living, it really makes the best sense for me to go with the 850. Another thought as I made the decision: while with 2 cards you can still leave some space for air by skipping a slot, I bet jamming 3 side by side gets pretty damn hot (I know you face this with your UBER). For now I'm likely going to drop it in the rig with my 480, and I may put the old 750TX downstairs so that rig will have 2 PSUs to feed it well. Thinking about picking up a used 275 just cuz
    I'll be experimenting soon to see just how quickly I can really get the 480 turning WUs ... my goal is 1.5 hours each, but I think that's pretty aggressive and am not sure I can actually get there.

  16. #16
    Xtreme Cruncher
    Join Date
    Jul 2007
    Location
    @ the computer
    Posts
    2,510
    ok i finally added a PPD calculator, the cc_config file, and instructions for SWAN_SYNC in the second post.

  17. #17
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Los Angeles/Hong Kong
    Posts
    3,058
    I suggest this thread be stickied.
    Team XS: xs4s.org



  18. #18
    Xtreme Cruncher
    Join Date
    Oct 2006
    Location
    1000 Elysian Park Ave
    Posts
    2,669
    I'll give you guys a tip: if you hate that fat border in Windows Vista/7, go into personalize -> windows color -> advanced -> border padding and turn it down to 1 or 0 like i do........
    i3-8100 | GTX 970
    Ryzen 5 1600 | RX 580
    Assume nothing; Question everything

  19. #19
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Croatia,Zagreb
    Posts
    140
    I have only one PC crunching both GPU and CPU grid at the same time. How much CPU capacity should be left free for GPU grid to get good results? Right now it's 90% for CPU grid and the rest for the GPU.
    i7 950@4.0, GTX 470 700mhz gpu
    Core i7 950@4.0 Ghz, 1.180v, Venomus X
    Gigabyte x58 ud3r, 6gb Mushkin XP3, Kingston V+ "325" 64gb + WD Black 750gb
    GTX 470, TR Spitfire, 800/1900 + MSI GTX 470 Twin Frozr II 800/1900
    Corsair HX850

  20. #20
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    In BOINC Manager, set 99% of processors and 100% CPU time.
    Then add the SWAN_SYNC environment variable = 0.
    This will run 7 WCG WUs and 1 GPUGrid WU at full speed.

  21. #21
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Croatia,Zagreb
    Posts
    140
    Is 45,787 in one day a good score for a gtx 470 on win7?
    Core i7 950@4.0 Ghz, 1.180v, Venomus X
    Gigabyte x58 ud3r, 6gb Mushkin XP3, Kingston V+ "325" 64gb + WD Black 750gb
    GTX 470, TR Spitfire, 800/1900 + MSI GTX 470 Twin Frozr II 800/1900
    Corsair HX850

  22. #22
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    That's about right for Win7.

  23. #23
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Croatia,Zagreb
    Posts
    140
    Thx.
    I put it on 670MHz, so I'll see what the progress is tomorrow. Soon there will be another machine with Win XP and a GTX 260, I hope.
    Core i7 950@4.0 Ghz, 1.180v, Venomus X
    Gigabyte x58 ud3r, 6gb Mushkin XP3, Kingston V+ "325" 64gb + WD Black 750gb
    GTX 470, TR Spitfire, 800/1900 + MSI GTX 470 Twin Frozr II 800/1900
    Corsair HX850

  24. #24
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Los Angeles/Hong Kong
    Posts
    3,058
    Quote Originally Posted by roki977 View Post
    Thx.
    I put on 670MHz so I'll see what will be progress for tomorrow. Soon there will be another machine with Win XP and GTX 260, I hope.
    Core clock does not affect the output. Try cranking up the shaders.
    Team XS: xs4s.org



  25. #25
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Croatia,Zagreb
    Posts
    140
    I know, they are connected 1:2.
    Core i7 950@4.0 Ghz, 1.180v, Venomus X
    Gigabyte x58 ud3r, 6gb Mushkin XP3, Kingston V+ "325" 64gb + WD Black 750gb
    GTX 470, TR Spitfire, 800/1900 + MSI GTX 470 Twin Frozr II 800/1900
    Corsair HX850
