
Thread: Actual influence of flow rate on system temps

  1. #1
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561

    Actual influence of flow rate on system temps

    THE ACTUAL INFLUENCE OF FLOW RATE ON SYSTEM TEMPS

    It’s been a while since anyone published real-life data measuring the actual impact of adding graphics water-block(s) to a loop, and we were interested in observing how the latest EVGA FTW blocks would integrate with the Apogee XT. Taken individually, these blocks are designed to dissipate extreme amounts of heat very efficiently, but this also results in substantially higher pressure drop values than in earlier designs. So the question was, how do they work together?

    More specifically, the questions we wanted to answer were:

    1. What is the impact of adding a high-end graphics card on the CPU temperature, and does it affect the CPU’s maximum stable overclock?
    2. What is the impact of adding two high-end graphics cards in SLI on the CPU temperature; does it affect the CPU’s maximum stable overclock, and what GPU block configuration works best, serial or parallel?
    3. What is the thermal performance of the GPU cooling solutions under heavy stress in various configurations?


    Equipment:

    • For the purpose of these tests, we wanted to cover the largest possible audience in the enthusiast community. So we set up a new bench which we believe represents a typical middle- to upper-range system. It is composed of an MCR320 Drive radiator with built-in MCP355 pump and an Apogee XT waterblock; the loop uses ½" lines. The fans are Gentle Typhoons (D1225C12B5AP-15) running at 1850 rpm and rated at 28 dB. We chose them because they are popular, and we found that they represent a good compromise between cooling performance and operating noise.
    • Components are connected to the loop with CPC quick-disconnect fittings; they are fairly restrictive, but the time they save in changing setups overshadows any other considerations.
    • The CPU is an early Core i7 920, revision C0/C1, stepping 4.
    • For the graphics cards, we wanted (2) EVGA GTX480 FTW, but they were unavailable at the time of testing, so we settled for (2) EVGA GTX470 FTW instead. Given the increasing popularity of the 470 for its overclockability and bang-for-the-buck factor, it’s not such a bad thing anyway.
    • The Motherboard is a Gigabyte EX58-UD3R, and the OS is Windows 7 Ultimate 64 Bit.


    Methodology:

    • The CPU’s maximum stable overclock was well established, since we have been using this same 920 ever since its introduction. It is 4095 MHz (Intel Turbo mode on, and HT enabled), at 1.424 V (after droop).
    • The GPUs’ maximum stable overclock was established in the graphics tests using Furmark in extreme burn mode at 1920x1050 for a minimum of two hours, and further validated by running 3DMark (Vantage results are posted in the report).
    • Max stable overclock for one card was 898 MHz core and 1050 MHz memory, @ 1.087 V.
    • Max stable overclock for 2 cards in SLI was 825 MHz core and 1000 MHz memory, @ 1.087 V. Note: To ascertain that our maximum stable overclock in SLI was not temperature related, we also tested the cards at 850 MHz using our extreme bench composed of (2) MCR320 radiators with (6) 82 CFM fans and (2) MCP655’s in series, and the test failed.

      In order to answer our initial questions, we conducted two sets of tests:

    • CPU load tests: In order to maintain consistency with previous test data, we ran our usual 8 instances of BurnK6. We logged the temperature results at 2-second intervals using CoreTemp. The average temperature of the 4 cores is reported.
    • GPU load tests: We used Furmark in extreme burning mode, windowed at 1920x1050, with post-processing off to enable 100% load on both GPUs in SLI configuration, and logged the temperature results at 2-second intervals with GPU-Z. (A minimal log-averaging sketch follows below.)
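    For readers who want to reproduce the averaging, here is a minimal sketch of how a logged temperature file can be reduced to the averages reported in the tables. The CSV layout and column names are assumptions for illustration only, not the exact format produced by CoreTemp or GPU-Z; adjust them to your own log.

    Code:
    import csv
    from statistics import mean

    def average_core_temps(log_path, core_columns):
        """Average the listed temperature columns within each sample,
        then average across all samples, as in the report tables."""
        per_sample = []
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                per_sample.append(mean(float(row[c]) for c in core_columns))
        return mean(per_sample)

    # Hypothetical file and column names; real CoreTemp / GPU-Z logs may differ.
    print(average_core_temps("coretemp_log.csv",
                             ["Core 0", "Core 1", "Core 2", "Core 3"]))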


    Environmental Temperature recording:

    • Air temperature: each fan was equipped with a Type T thermocouple (accurate to ±0.1°C) at the inlet, and the average of the 3 values is reported.
    • Coolant temperature was measured at the radiator inlet with a Type T thermocouple (accurate to ±0.1°C).


    We hope that the following data will help readers rationalize their future setup decisions; without further ado, here are the test results.

    First set of CPU tests featuring a single card in the loop:



    To our first question, “Impact on CPU temperature of adding one restrictive GPU waterblock in the same loop”: between CPU tests #1 and #2, we see that the increase in CPU temperature when adding a GTX470 FTW is equal to 0.68°C (Note 1 above). We can also report that the CPU remained entirely stable under these test conditions.

    With further analysis, we can also determine the actual temperature increase solely due to the added heat generated by the GPU; it is calculated in (*) below and is equal to 0.34°C. This allows us to conclude that the added pressure drop in the loop contributed another 0.34°C to the rise in CPU temperature, as calculated in (**) below. This is a very marginal increase considering the relatively high pressure drop of both blocks, and also considering that the reduced flow rate decreases both the waterblock’s AND the radiator’s efficiency.

    (*) : (ΔT Water to Air test 2) – (ΔT Water to Air test 1): 4.63 – 4.29 = 0.34°C
    (**): (Note 1) – (*): 0.68 - 0.34 = 0.34°C
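    To make the bookkeeping explicit, here is a small illustrative calculation (not part of the test rig itself) that reproduces the decomposition above from the reported numbers:

    Code:
    # Split the total CPU temperature rise into the part caused by the extra
    # heat dumped into the coolant and the part caused by the reduced flow
    # rate (added pressure drop). Values are from CPU tests #1 and #2 above.
    total_rise     = 0.68   # deg C, Note 1: CPU rise after adding the GPU block
    dT_water_air_1 = 4.29   # deg C, water-to-air delta, test 1 (CPU only)
    dT_water_air_2 = 4.63   # deg C, water-to-air delta, test 2 (CPU + GPU block)

    rise_from_heat = dT_water_air_2 - dT_water_air_1   # (*)  -> 0.34 deg C
    rise_from_flow = total_rise - rise_from_heat       # (**) -> 0.34 deg C
    print(rise_from_heat, rise_from_flow)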

    Second set of CPU tests, featuring the SLI setup:



    To the first part of our second question, “What is the impact of adding two high-end cards in SLI to the CPU temperature?”, CPU tests #3 and #4 show us that the temperature rise in the CPU is 1.26°C (note 2) when the cards are set up in parallel, and 1.84°C (note 3) when they are set up in series. In terms of CPU stability, the CPU remained fully stable in both cases.

    So, while the overall increase in CPU temperature remained nominal (about 2%), it is also interesting to note that the parallel setup shows a measurable advantage of 0.58°C over serial, as calculated in (*) below, which can be 100% attributed to a substantially lower pressure drop at the system level. This fact clearly answers the second part of our question #2, “what GPU block configuration works best, serial or parallel?”: the higher flow rate in the CPU waterblock and in the radiator yields a lower thermal resistance and results in lower temps in a parallel setup than in a serial setup.

    (*): Note 3 – note 2: 1.84 – 1.26 = 0.58°C

    Finally, for those users who already have a GPU in their CPU loop, want to add a second one and need to know what to expect, the data presented in note 4 shows that a second VGA card installed in parallel will result in another 0.58°C rise in CPU temperature, whereas note 5 shows that installing the second card in series will add 1.16°C.

    Graphics Tests:

    The graphics stress tests are obviously also influenced by flow rate, and we will see how below. CPU temperature is reported for reference only, since there is very little load on the CPU during intensive graphics work (50% on one core under Furmark).



    We see a substantial increase in average GPU temperature from one card to two, ranging from 7.39°C for a parallel setup (calculated in Note 1) to 7.99°C for a serial setup (calculated in Note 2). But while 7 to 8°C may seem like a lot, it is also important to remember that the overclock limitation in SLI mode was demonstrated during our initial setup NOT to be temperature related (see the note in the Methodology section above).

    Finally, note 3 is of particular interest within the framework of this study, because it shows that even at the GPU level, a parallel setup with modern blocks such as those presented here remains a superior solution to serial, as evidenced by a 0.6°C advantage of parallel over serial.

    Conclusions:

    While the importance of flow rate is certainly not to be dismissed when planning a system setup, as particularly evidenced by the differences found between parallel and serial VGA configurations, we see with the tested Swiftech components that the overall impact of this parameter remains nominal in terms of total system performance. This is because these components are designed to be highly efficient at low flow.

    Results summary for reference:

    Last edited by gabe; 06-05-2010 at 12:28 PM.
    CEO Swiftech

  2. #2
    Xtreme Member
    Join Date
    Aug 2004
    Posts
    487
    Nice work Gabe!!

    Intel Core i7-2600K---> Koolance 370
    3 X EVGA 480 GTX SC's Watercooled--> Koolance VID-NX480 blocks, HWLabs GTX 480, Laing D5 Vario
    Koolance RP-45X2 Reservoir
    Corsair Obsidian 800D
    Gigabyte GA-Z68X-UD7-B3 MB
    SILVERSTONE ST1500 1500W PSU
    16G CORSAIR DOMINATOR-GT (4 x 4GB) PC3 12800
    2 X Kingston HyperX SH100S3B/120G SATA 3 in RAID 0
    1 X WD Caviar Black 1TB
    Dell 3007 WFP-HC
    Z-5500's
    Windows 7 64bit

  3. #3
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by Section8 View Post
    Nice work Gabe!!
    Thanks, just trying to put things back in perspective. It is important to go back to basics from time to time :-)
    CEO Swiftech

  4. #4
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    207
    Hi Gabe. Thanks for taking the time to test. I need to clarify something.

    1. For the 1st part, when the GPU was added to the CPU loop, was the GPU under a stressed condition or was it idling?

    2. If it was under an idle condition, would you happen to have data on the stressed GPU delta? It would be great to see the CPU delta both standalone and with the GPU.

    Apologies as this is off your thread topic.

    Wes

  5. #5
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Indonesia
    Posts
    182
    Very informative, thanks for the effort Gabe,

    Just an input: what about stressing both components (Folding or similar software)?
    920 @3.5ghz 1.06V
    EVGA 762
    Corsair Dominator GT 2000
    Manli GTX460 SLI
    several HDD
    corsair ax1200

  6. #6
    Xtreme Member
    Join Date
    Jul 2006
    Location
    Singapore
    Posts
    459
    Looks like a 120.3 rad is powerful enough to cool a CPU and 2 x GPU at respectable temps.

    Phil

    i7 4GHz ♦ Asus R2G ♦ OCZ Intel XMP ♦ Asus 5870 ♦ Crucial M4 ♦ Swiftech ♦ Koolance


    X4 3.5GHz ♦ Biostar 890GXE ♦ OCZ AMD BE ♦ Asus 8800 ♦ WD Veloci ♦ Swiftech ♦ XSPC

  7. #7
    Xtreme Addict
    Join Date
    Feb 2005
    Location
    Tennessee
    Posts
    1,171
    It really is stunning when you consider the improvements that have been made in LC over the past few years. I remember when everyone used to go on and on about how you had to have an incredible flow rate to have good temps. Granted, I don't think it's wise to be like some of the European setups with 50 elbows per setup on 1/4" tubing, but watercooling sure has come a long way.

  8. #8
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    This is really nice Gabe

    Greatly simple and concise in presentation and the results are very telling

    Would be interesting to see what happens when you load both GPUs and the CPU together, but I think that's a rare use case (basically only happens when you deliberately load both subsystems).

  9. #9
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    207
    Quote Originally Posted by Vapor View Post
    This is really nice Gabe

    Greatly simple and concise in presentation and the results are very telling

    Would be interesting to see what happens when you load both GPUs and the CPU together, but I think that's a rare use case (basically only happens when you deliberately load both subsystems).
    The results will be useful for folks who are on Adobe CS5 programs like Premiere Pro, etc., that stress the CPU and offload encoding/processing to the GPU. Nonetheless, appreciate the hard work Gabe.

    Wes

  10. #10
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by wesley View Post
    Hi Gabe. Thanks for taking the time to test. I need to clarify something.

    1. For the 1st part, when the GPU was added to the CPU loop, was the GPU under a stressed condition or was it idling?

    2. If it was under an idle condition, would you happen to have data on the stressed GPU delta? It would be great to see the CPU delta both standalone and with the GPU.

    Apologies as this is off your thread topic.

    Wes
    Yes, in the CPU tests, the GPUs are mostly at idle.
    Stressed GPU conditions for (1) card are shown in Test #1 of the Graphics tests.

    Quote Originally Posted by nyeah View Post
    Very informative, thanks for the effort Gabe,

    Just an input: what about stressing both components (Folding or similar software)?
    I knew this would come up (Folding), and I downloaded Folding for GPU, both in console and command line, but I was unable to place any significant load onto the GPUs. Maybe I didn't wait long enough, maybe I didn't set the correct parameters, but GPU load remained at 1% or less. I think it might be possible, but I am just not sure how to do it.

    If any of the Folding@home folks are willing to guide me on how to place a constant 100% load on the GPUs, I'd be more than happy to do another set of tests, with the reservations noted below.

    Quote Originally Posted by Vapor View Post
    This is really nice Gabe

    Greatly simple and concise in presentation and the results are very telling

    Would be interesting to see what happens when you load both GPUs and the CPU together, but I think that's a rare use case (basically only happens when you deliberately load both subsystems).
    I was also interested in testing this, and looked for ways to do it (see above comments re. Folding). But if you think about it, all it would really accomplish is to scale all the results upwards, proportionally with the increased heat load.

    This would be of interest in terms of possibly re-evaluating the max CPU and/or max GPU overclock, and accordingly re-evaluating the size of the heat exchanger and/or the choice of fans. But within the framework of this particular study, emphasis was placed on the impact of flow rate on component cooling. We wanted to establish what the range of values was under the vast majority of operating conditions at the time of this study.

    Quote Originally Posted by wesley View Post
    The results will be useful for folks who are on Adobe CS5 programs like Premiere Pro, etc., that stress the CPU and offload encoding/processing to the GPU. Nonetheless, appreciate the hard work Gabe.

    Wes
    Same comment as above. If the overall heat load increases, all the temps will go up, but the relative impact of the flow rate will remain the same.
    Last edited by gabe; 06-05-2010 at 12:26 PM.
    CEO Swiftech

  11. #11
    Registered User
    Join Date
    Mar 2010
    Location
    Morgan Hill, California
    Posts
    14
    Amazing!

    Maybe link it in the stickied "Information/Guides & Reviews/Tests & Galleries" thread?

  12. #12
    Xtreme Cruncher
    Join Date
    Aug 2009
    Location
    Huntsville, AL
    Posts
    671
    I love _real_ vendor testing. Thanks Mr. Swiftech!
    upgrading...

  13. #13
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by GTOViper View Post
    Amazing!

    Maybe link it in the stickied "Information/Guides & Reviews/Tests & Galleries" thread?
    Quote Originally Posted by meanmoe View Post
    I love _real_ vendor testing. Thanks Mr. Swiftech!
    Thanks.. actually this post follows in the glorious footsteps of Stew Forrester (AKA Cathar), who published a related article here in June of 2007, almost 3 years to the day!

    As is often the case, the laws of physics were not broken, and the scale of the temperature variations correlates very well; this is essentially what we sought to illustrate.
    CEO Swiftech

  14. #14
    Xtreme Member
    Join Date
    Apr 2010
    Posts
    207
    Quote Originally Posted by gabe View Post
    Same comment as above. If the overall heat load increases, all the temps will go up, but the relative impact of the flow rate will remain the same.
    Hi Gabe,

    Thanks for taking the time to reply. Testing is extremely time consuming and I understand your test boundaries and parameters.

    I am a professional photographer and avid gamer. Hence there are times when both GPU and CPU are stressed out concurrently.

    Wesley

  15. #15
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by wesley View Post
    Hi Gabe,

    Thanks for taking the time to reply. Testing is extremely time consuming and I understand your test boundaries and parameters.

    I am a professional photographer and avid gamer. Hence there are times when both GPU and CPU are stressed out concurrently.

    Wesley
    There is no denying that it is possible to load both; in fact I just came across BOINC, which might end up being able to do the trick. I am still looking for a way to load the GPUs to 100% though, as right now it remains obstinately stuck at a 57% load. Another chapter of this review next, then..
    CEO Swiftech

  16. #16
    Xtreme Enthusiast
    Join Date
    Apr 2008
    Location
    France
    Posts
    950
    Very good job. Confirms numerically what we knew empirically about parallel GPU being the best solution (with a CPU in the loop). This will be of great help when giving advice to newcomers.

    24/7 running quiet and nice

  17. #17
    Xtreme Enthusiast
    Join Date
    Feb 2008
    Location
    Canaduh
    Posts
    731
    gabe
    why not run Furmark and LinX at the same time to load both GPU and CPU @ 100%?
    Intel i7 980x / 3001B331
    HK3.0+LaingDDCPRO+XSPCRX360+1xMCR220
    EVGA Classified X58+EK FB
    6GB Corsair Dominator GT 1866 7-8-7-20(TR3X6G1866C7GT)
    ASUS GTX580
    Enermax Revolution+85 950w
    Corsair Obsidian






  18. #18
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Wow great stuff. Certainly helped clarify things. I'm awaiting a kodomo 5800 and was worried what the effect on flow rate would be, but after seeing this I'm sure things will be peachy.

    Keep up the great work.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  19. #19
    Registered User
    Join Date
    Sep 2008
    Location
    Ukraine
    Posts
    44
    Quote Originally Posted by mk-ultra View Post
    gabe
    why not run Furmark and LinX at the same time to load both GPU and CPU @ 100%?
    Same proposal here, just set the thread count to 7 or even 6 in LinX to give Furmark some CPU resources for rendering.

  20. #20
    Xtreme Addict
    Join Date
    Jun 2007
    Posts
    1,442
    Nice read Gabe, really appreciate you taking the time to do the testing and share it.

    @mk-ultra, LinX is not good for temp testing, since it is an intermittent load and will introduce high error. But your idea is nevertheless one way of doing it.

    As stated, Furmark by default loads only core 0 (affinity) as it only needs one thread. But the CPU load on that one thread is not as intense as Prime or CoreDamage (programs with a stable enough load for temp testing), and the CPU temp on the Furmark-loaded core will be 3-4°C less than if loaded with Prime.

    I was testing 1 loop vs 2 in mine, using Prime or CoreDamage + Furmark (2 rads). You can:

    1) Load just the CPU with Prime or CoreDamage. For example, CoreDamage CPU temps = 72, 73, 72, 69.

    2) Then, as suggested, run Furmark on 1 thread and Prime/CoreDamage on the other 7 threads. But temps will be = 71, 76, 75, 72 (CPU temps are 3-4°C lower on core 0, since that one thread carries the much lower CPU load of Furmark). After the water equilibrates from the combined CPU/Furmark load, you can then change affinity to all 8 cores in Prime/CoreDamage and quickly take CPU temps to avoid the 3-4°C lower temp on that one core (a rough scripted version of this affinity switch is sketched at the end of this post). Then temps = 75, 76, 75, 72... then after a minute or so, temps start to drop from the reduced Furmark load (as the framerate drops to half in Furmark).

    Or you could just test 7 threads of CPU vs CPU/GPU, though that isn't exactly apples to apples either, since it will be 7 threads CPU vs 7 threads CPU + 1 thread on GPU.

    But I would rather see a realistic situation, i.e. the most taxing real program that loads both CPU and GPU, like I think Gabe is trying to do.

    But if you use Furmark + CoreDamage/Prime, you will likely see ~6-7°C higher CPU temps with the CPU/GPU load vs just the CPU load, looking at his air-to-water deltas and ignoring small rad efficiency differences, etc. Mine was only 3°C higher with 2 large rads and a GTX 295.
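    If you want to script that affinity switch instead of doing it by hand in Task Manager, something like this rough sketch works. It assumes psutil is installed and that the stress program runs as prime95.exe; the process name, core count and wait time are placeholders to adjust for your own setup.

    Code:
    import time
    import psutil

    def repin(process_name, cores):
        """Set the CPU affinity of every running process with this name."""
        for proc in psutil.process_iter(["name"]):
            if (proc.info["name"] or "").lower() == process_name.lower():
                proc.cpu_affinity(cores)

    repin("prime95.exe", list(range(1, 8)))  # phase 1: cores 1-7, Furmark keeps core 0
    time.sleep(30 * 60)                      # let the coolant temperature level off
    repin("prime95.exe", list(range(8)))     # phase 2: all 8 cores, read temps quickly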
    Last edited by rge; 06-06-2010 at 07:49 AM.

  21. #21
    Xtreme Member
    Join Date
    Jan 2009
    Location
    Spokompton, WA
    Posts
    188
    Nice to see the effects of flow rate, but the heat dump from the other components is a big factor also.

    I just switched from a single loop containing an overclocked Q9650 w/ HK 3.0, chipset, mosfet, and 2 EK 5850 blocks w/ overclocked cards, with a Swiftech MCR 420, MCR 220 and two DDC 3.2's, to two separate loops. Loop one was CPU, chipset, mosfet, MCR 420 and DDC 3.2. Loop two was the EK 5850 block, MCR 220, and DDC 3.2. By separating into two loops I cut CPU temps by 8°C with everything at full load, folding on both CPU and GPU. GPU temps went up, being only on the 220, but they still stay around 40°C.
    Asus Rampage III Formula
    I7 970 (200x23=4610)
    EK Supreme HF Copper
    Swiftech 420 QP w/ (4) Scythe GT AP-15 (1850 RPM)
    Swiftech 355 w/ ek X-Top v2

    (3) Asus 5850 (1050/1250/1.3v)
    (3) EK 5850 FC
    Swiftech 220 QP w/ (2) Scythe GT AP-15 (1850 RPM)
    Swiftech 355 w/ ek X-Top v2

    Corsair HX850
    (3) 2GB Gskill F3-12800CL7T-6GBPI
    (1) Intel X25-M G2
    (3) WD Black 1TB



  22. #22
    Xtreme Enthusiast
    Join Date
    Feb 2008
    Location
    Canaduh
    Posts
    731
    Yeah, didn't think about the end of the pass where usage drops.
    Last edited by mk-ultra; 06-06-2010 at 08:09 AM.
    Intel i7 980x / 3001B331
    HK3.0+LaingDDCPRO+XSPCRX360+1xMCR220
    EVGA Classified X58+EK FB
    6GB Corsair Dominator GT 1866 7-8-7-20(TR3X6G1866C7GT)
    ASUS GTX580
    Enermax Revolution+85 950w
    Corsair Obsidian






  23. #23
    Xtreme Addict
    Join Date
    Jun 2007
    Posts
    1,442
    Quote Originally Posted by gergregg View Post
    Nice to see the effects of flow rate, but the heat dump from the other components is a big factor also.
    I think his title is alluding to the serial vs parallel GPU differences, where flow is the determinant. But yes, the difference between CPU-only and CPU+GPU comes largely from the idle TDP of the GPU as well as from flow. You would have to unplug the GPU and use a separate air-cooled GPU to test just the flow differences, but that's not real-world use.

  24. #24
    Xtreme Member
    Join Date
    Jan 2009
    Location
    Spokompton, WA
    Posts
    188
    Quote Originally Posted by rge View Post
    I think his title is alluding to the serial vs parallel GPU differences, where flow is the determinant. But yes, the difference between CPU-only and CPU+GPU comes largely from the idle TDP of the GPU as well as from flow. You would have to unplug the GPU and use a separate air-cooled GPU to test just the flow differences, but that's not real-world use.
    I agree, the testing is just there to show the effects of reduced flow rate on CPU temps, which is really good information for people adding other small blocks to their loops (NB, FC MB, mosfet) that don't add a lot of heat. People will know that the reduction in flow rate and the small amount of added heat will have very little effect on overall CPU temps.

    Good information to have data on.
    Asus Rampage III Formula
    I7 970 (200x23=4610)
    EK Supreme HF Copper
    Swiftech 420 QP w/ (4) Scythe GT AP-15 (1850 RPM)
    Swiftech 355 w/ ek X-Top v2

    (3) Asus 5850 (1050/1250/1.3v)
    (3) EK 5850 FC
    Swiftech 220 QP w/ (2) Scythe GT AP-15 (1850 RPM)
    Swiftech 355 w/ ek X-Top v2

    Corsair HX850
    (3) 2GB Gskill F3-12800CL7T-6GBPI
    (1) Intel X25-M G2
    (3) WD Black 1TB



  25. #25
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by gmat View Post
    Very good job. Confirms numerically what we knew empirically about parallel GPU being the best solution (with a CPU in the loop). This will be of great help when giving advice to newcomers.
    If your remark in parentheses "(with a CPU in the loop)" signifies that you believe a dedicated loop would reverse the results, then do I have news for you! We also tested dedicated GPU loop performance, but did not publish the results yet in order to stay sharply focused on the topic. Parallel performance of these blocks was also superior to serial in a dedicated loop; this will be discussed in another chapter of this review.

    Quote Originally Posted by mk-ultra View Post
    gabe
    why not run Furmark and LinX at the same time to load both GPU and CPU @ 100%?
    Haven't tried LinX, but the problem is not with the CPU load, it is with the GPU load. As you load up the CPU, the second GPU drops in usage to 0.

    Quote Originally Posted by rge View Post
    I think his title is alluding to the serial vs parallel GPU differences, where flow is the determinant. But yes, the difference between CPU-only and CPU+GPU comes largely from the idle TDP of the GPU as well as from flow. You would have to unplug the GPU and use a separate air-cooled GPU to test just the flow differences, but that's not real-world use.
    rge, I think you missed the part where we showed precisely that the impact of the thermal load from the idle GPU on the CPU is 0.34°C, and the impact of the reduced flow rate is another 0.34°C, for a total impact of 0.68°C when adding 1 GPU to a CPU loop.
    Last edited by gabe; 06-06-2010 at 05:34 PM.
    CEO Swiftech
