
Thread: Swiftech® releases new Multi-port Apogee™ HD Waterblock & MCRx20 Drive R3 Rads

  1. #1
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561

    Swiftech® releases new Multi-port Apogee™ HD Waterblock & MCRx20 Drive R3 Rads

    The Apogee™ HD is Swiftech's new flagship CPU waterblock.

    Close to 18 months in the making, it was designed to surpass its predecessor, the Apogee™ XT, in all critical areas:

    • Improved thermal performance, with emphasis on the latest and upcoming Intel® and AMD® processors: the HD is Socket LGA2011 (Intel®) and "Bulldozer" (AMD®) ready.

    • Reduced flow restriction compared to the Apogee™ XT Rev2.

    • Innovative features, with the introduction of multi-port connectivity: two more outlet ports have been added for dramatic flow rate improvements in multiple-waterblock configurations when used with the new MCRx20 Drive Rev3 radiators, which now also include two additional inlet ports (3 total).

    • Improved thermal performance out of the box with the inclusion of high-performance PK1 thermal compound.

    • Cosmetic appeal: the Apogee™ HD is now available in two colors, classic black or fashionable white, to match high-end case offerings from NZXT™, Silverstone®, Thermaltake and many others.

    • Elegant multi-mount hold-down plate for AMD processors.

    • Reduced cost of ownership: $74.95 MSRP.


    Compatibility:

    Intel

    The stock hold-down plate is compatible with all Intel desktop processors: sockets 775, 1155, 1156, 1366, and 2011.
    Motherboard backplates:
    • 1155/56 & 1366 included

    • 775 sent free of charge upon request

    • Mounting screw springs for socket 2011 will be made available soon


    AMD

    A mounting kit for AMD® processors (sockets 754, 939, 940, AM2, AM3, 770, F, FM1) and the legacy Intel® server socket 771 is available to users free of charge (shipping not included) upon request.

    Internals:

    The housing is precision-machined from black or white polyacetal copolymer (POM).


    The base-plate is precision-machined from C110 copper. The thermal design of the cooling engine is characterized by Swiftech's fin/pin matrix, composed of 225 µm (0.009") micro-structures. The matrix has been further refined with variable-width cross channels to improve flow rate without affecting thermal efficiency, and fabrication quality has been enhanced to reduce micro-machining imperfections.


    The mating surface to the CPU is mirror-polished, in full compliance with Swiftech's highest quality standards.


    Flow Parallelization: "How to create a mixed serial+parallel configuration in complex loops for dramatically improved flow performance."

    I am reprinting below the entire text listed on our site under this chapter:

    Among the most obvious benefits of harnessing the power of water-cooling is the ability to daisy-chain multiple devices: the CPU, graphics, chipset, and even memory.
    Up until now, the most common way to do this has been to connect the waterblocks in series. In this type of configuration, however, the pressure drop generated by each device accumulates, which substantially reduces the overall flow rate in the loop; and as the flow rate diminishes, so does the thermal performance of the system. Many extreme users have resorted to adding a second pump to their system to mitigate this effect.

    There is another strategy for connecting multiple waterblocks: the parallel configuration. It is very advantageous because when two identical devices are parallelized, the flow through each is divided in half, but the pressure drop is divided by a factor of four (pressure drop scales roughly with the square of the flow rate), alleviating the need for a second pump. However, it requires splitting the main line with Y connectors, and it is seldom used because the connectivity is awkward and cumbersome.
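
    To see where the factor of four comes from, here is a minimal Python sketch modeling each block as a quadratic restriction, dP = k x Q^2; the coefficient and flow rate below are arbitrary placeholders, not measured values:

    Code:
    # Model one waterblock as a quadratic restriction: dP = k * Q**2.
    # K and Q are arbitrary placeholders, not measured Swiftech values.
    def dp(k, q):
        """Pressure drop (PSI) across one block at flow rate q (GPM)."""
        return k * q ** 2

    K = 2.0   # assumed restriction coefficient of one block (PSI per GPM^2)
    Q = 1.0   # assumed total loop flow rate (GPM)

    one_block    = dp(K, Q)        # 2.00 PSI: a single block at full flow
    two_series   = 2 * dp(K, Q)    # 4.00 PSI: in series, the drops add
    two_parallel = dp(K, Q / 2)    # 0.50 PSI: each branch sees half the flow

    print(one_block, two_series, two_parallel)
    # Half the flow per block, but one quarter of the single-block pressure
    # drop: the "divided by a factor of four" described above.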

    Enter the Multi-port Apogee™ HD waterblock and the MCR Drive Rev3 radiators. With two additional outlet ports on the Apogee™ HD and two additional inlet ports on the MCR Drive Rev3 radiator, it is now possible to conveniently set up a high-flow multi-block loop without using splitters. We will show below that while it always remains preferable to keep the CPU waterblock in series with the main line whenever possible, all other electronic devices in the loop are perfect candidates for parallelization. The resulting configuration is a mixed serial + parallel setup, i.e. the best of both worlds!

    The following flow charts illustrate two extreme setups (CPU + triple SLI + chipset + memory) and quantify the order-of-magnitude gain in flow performance that a mixed serial + parallel configuration can deliver:
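
    As a rough numeric version of that comparison (a minimal sketch: the pump curve and all restriction coefficients below are invented for illustration, not Swiftech measurements), one can intersect an assumed linear pump curve with each configuration's system curve:

    Code:
    # Estimate loop flow by intersecting a pump curve with the system curve.
    # Every number below is an invented placeholder for illustration only.
    PMAX, QMAX = 10.0, 5.0   # assumed pump: 10 PSI shutoff head, 5 GPM free flow

    def k_series(*ks):       # quadratic restrictions in series simply add
        return sum(ks)

    def k_parallel(*ks):     # equal dP across branches, branch flows add
        return 1.0 / sum(k ** -0.5 for k in ks) ** 2

    def loop_flow(k_eq):     # solve PMAX*(1 - q/QMAX) = k_eq * q**2 for q
        a, b, c = k_eq, PMAX / QMAX, -PMAX
        return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

    # assumed coefficients (PSI per GPM^2): CPU, radiator, GPU, chipset, RAM
    K_CPU, K_RAD, K_GPU, K_CHIP, K_RAM = 1.5, 0.3, 2.0, 3.0, 3.0

    all_serial = k_series(K_CPU, K_RAD, K_GPU, K_GPU, K_GPU, K_CHIP, K_RAM)
    mixed      = k_series(K_CPU, K_RAD,
                          k_parallel(K_GPU, K_GPU, K_GPU, K_CHIP, K_RAM))

    print(f"all-serial loop: {loop_flow(all_serial):.2f} GPM")   # ~0.78
    print(f"mixed loop:      {loop_flow(mixed):.2f} GPM")        # ~1.83

    Even with these made-up numbers, the mixed layout more than doubles the loop flow; the more blocks in the chain, the wider the gap grows.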

    As mentioned earlier, however, the consequence of parallelizing cooling devices is that the flow rate inside said devices is also divided, and therefore lower. So we now need to introduce another concept to further qualify the rationale behind parallelization: the heat flux generated by the different electronic devices, i.e. the rate of heat energy that they transfer through a given surface.
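
    For a ballpark feel of the numbers (the wattages and die areas below are rough assumptions, not vendor specifications), heat flux is simply power divided by die area:

    Code:
    # Heat flux = power / die area. Rough ballpark assumptions, for
    # illustration only -- not vendor specifications.
    def heat_flux(watts, area_cm2):
        return watts / area_cm2            # W/cm^2

    cpu_flux = heat_flux(200.0, 2.2)       # ~200 W CPU through a ~2.2 cm^2 die
    gpu_flux = heat_flux(250.0, 5.2)       # ~250 W GPU through a ~5.2 cm^2 die

    print(f"CPU: {cpu_flux:.0f} W/cm^2")   # ~91 W/cm^2
    print(f"GPU: {gpu_flux:.0f} W/cm^2")   # ~48 W/cm^2
    # Despite the higher total power, the GPU's larger die roughly halves
    # the heat flux, which is why GPUs are less sensitive to flow rate.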

    CPUs

    • Modern CPUs generate a lot of heat (up to and sometimes more than 200 W), which is transferred through a very small die surface (the die is the actual silicon, usually protected by a metallic plate called a heat spreader or IHS). Among other things, what this means in practical terms is that higher flow rates will have relatively more impact on the CPU operating temperature than on any other device. For this reason, and in most configurations, the Apogee™ HD CPU waterblock should always be connected in series with the main line, so it can benefit from the highest possible flow rate.

    ALL other devices except radiators

    • GPUs, whether they have an IHS or not, also generate a lot of heat (sometimes even more than CPUs). However, the physical size of their dies is substantially larger than that of any desktop CPU. The resulting lower heat flux makes GPUs much less sensitive to flow rate. In fact, when both are liquid-cooled, we can readily observe that the GPU operating temperature is always much lower than that of the CPU. For this reason, it is 1/ always preferable to parallelize multiple graphics cards with each other, and 2/ when one or more GPU blocks are used in conjunction with one or more other devices like chipset and/or memory, it is always beneficial to parallelize the GPU(s) with said devices using the Apogee™ HD multi-port option.

    • Chipsets, memory, hard drives, and pretty much everything else one would want to liquid-cool in a PC can be placed in the same use-category as GPUs, either because they have a low or moderate heat flux, or because the total amount of heat they emit can be handled without sophisticated cooling techniques. What it boils down to is that they are even less flow-sensitive, and we submit that parallelization of these blocks should in fact become standard practice.

    Radiators

    The higher the flow rate inside a radiator, the faster it dissipates heat. For this reason, radiators should always remain on the primary line, just like the CPU block, in order to benefit from the highest possible flow rate.


    In conclusion, we can see that the multi-port Apogee™ HD, when coupled with the MCR Drive Rev3 radiators, makes a compelling case for optimizing complex loops: it maximizes the flow rate where it matters most (on the CPU and radiator) while offering splitter-free parallelization of up to three other components (GPUs, chipset, etc.).

    Alternate configuration:

    The Apogee™ HD allows an alternate configuration: by using the main outlet as an entry port instead of the inlet, you can parallelize the CPU itself with up to two more components: a second CPU, a GPU, a chipset, etc. While it remains true, as explained earlier, that CPUs benefit from higher flow rates than other components, the few degrees in performance gains might not be consequential to some users. In these situations, the “alternate” configuration could, for example, be beneficial as follows:

    • When cooling two CPUs, it might be desirable to parallelize them in order to maintain the exact same temperature for each CPU.

    • For one of the quickest upgrades ever: one could start with a CPU-only loop using the alternate configuration. Then, when installing additional waterblocks (graphics, for example), all that would be needed is to drain the liquid, replace the plug(s) with fitting(s), and connect the tubes to the new device(s). There is no need to remove the Apogee HD and no need to remove and recut tubes to length: the existing loop doesn’t need to be modified.
    CEO Swiftech

  2. #2
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    114
    Wow, looks top notch. Swiftech really makes some quality products. Can't wait for a review of the Apogee HD.
    phenom 2 940 stock
    gskill 4gb 1066 ddr2
    2 1.5Tb seagate hds in raid 0
    30gb ocz core series hd for os
    8800gts 640
    xigamatek 850w ps
    water cooling cpu: dtek fuzion 2, swiftech 320, 3 ultra kazes, d5 with detroit top
    custom acrylic case in progress :P

  3. #3
    Xtreme Addict
    Join Date
    Apr 2011
    Location
    North Queensland Australia
    Posts
    1,445
    Just as I got an XT Rev2 top, haha >.<

    -PB
    -Project Sakura-
    Intel i7 860 @ 4.0Ghz, Asus Maximus III Formula, 8GB G-Skill Ripjaws X F3 (@ 1600Mhz), 2x GTX 295 Quad SLI
    2x 120GB OCZ Vertex 2 RAID 0, OCZ ZX 1000W, NZXT Phantom (Pink), Dell SX2210T Touch Screen, Windows 8.1 Pro

    Koolance RP-401X2 1.1 (w/ Swiftech MCP35X), XSPC EX420, XSPC X-Flow 240, DT Sniper, EK-FC 295s (w/ RAM Blocks), Enzotech M3F Mosfet+NB/SB

  4. #4
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Definitely interested to see the difference between using one, two, or three outlets on the block. Both in terms of flow and temperatures.

  5. #5
    Xtreme Mentor
    Join Date
    Sep 2007
    Location
    Dallas
    Posts
    4,467
    Nice. I would have thought dual inlets/outlets, but this seems like a totally different concept. I think someone made a similar block a few years ago, but it never gained traction. Looks like I will be ordering one of these; if for nothing else, I like the new look. Great job Gabe and Stephen for thinking outside the box and coming up with something innovative.
    CPUID http://valid.canardpc.com/show_oc.php?id=484051
    http://valid.canardpc.com/show_oc.php?id=484051
    http://valid.canardpc.com/show_oc.php?id=554982
    New DO Stepping http://valid.canardpc.com/show_oc.php?id=555012
    4.8Ghz - http://valid.canardpc.com/show_oc.php?id=794165

    Desk Build
    FX8120 @ 4.6Ghz 24/7 / Asus Crosshair V /HD7970/ 8Gb (4x2Gb) Gskill 2133Mhz / Intel 320 160Gb OS Drive, WD 256GB Game Storage

    W/C System
    (CPU) Swiftech HD (GPU) EK HD7970 with backplate (RAM) MIPS Ram block (Rad/Pump) 3 x Thermochill 120.3 triple rads and Dual MCP355's with Heatkiller dual top and Cyberdruid Prism res / B*P/Koolance Compression Fittings and Quick Disconnects.

  6. #6
    Xtreme Mentor
    Join Date
    Mar 2006
    Location
    Evje, Norway
    Posts
    3,419
    Quote Originally Posted by lowfat View Post
    Definitely interested to see the difference between using one, two, or three outlets on the block. Both in terms of flow and temperatures.
    My guess is negligible. I think most of the resistance is in the fins, so once the water has reached the other side it won't matter as much where it goes
    Quote Originally Posted by iddqd View Post
    Not to be outdone by rival ATi, nVidia's going to offer its own drivers on EA Download Manager.
    X2 555 @ B55 @ 4050 1.4v, NB @ 2700 1.35v Fuzion V1
    Gigabyte 890gpa-ud3h v2.1
    HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
    Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 8.9.8.27.41 1T 110ns 1.605v
    C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
    DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
    Dell U2412m, G110, G9x, Razer Scarab

  7. #7
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by lowfat View Post
    Definitely interested to see the difference between using one, two, or three outlets on the block. Both in terms of flow and temperatures.
    Not sure you clearly understand the use.. this is meant to be used to parallelize multiple waterblocks after the CPU. You wouldn't get any measurable improvement by simply linking the block outlets directly to the rad, because the only pressure drop you'd cut would be that of the tubes, which wouldn't be measurable.

    Quote Originally Posted by Utnorris View Post
    Nice. I would have thought dual inlets/outlets, but this seems like a totally different concept, although I think someone made a block similar to this type a few years ago, but it never gained traction. Looks like I will be ordering one of these, if for nothing else, I like the new look. Great job Gabe and Stephen for thinking outside the box and coming up with something innovative.
    Yeah, there was a TEC block with dual inlet and dual outlet 7 or 8 years ago.. but you are right, this is an entirely different concept. People with multiple waterblocks spend hundreds on pumps to jack up their flow rate. Using this block with the MCR Drive totally cuts the pressure drop, alleviating the need for a second pump. Not to say you couldn't use a second pump with this, though; you'd just get a hell of a lot more flow in the CPU block that way, which could result in seriously measurable performance increases!
    CEO Swiftech

  8. #8
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    Interesting, it is almost like the block also serves as a proportioning valve, so to speak.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  9. #9
    Xtreme Mentor
    Join Date
    Mar 2006
    Location
    Evje, Norway
    Posts
    3,419
    1/2" nipples and 7/16" or 1/2" tubing for the main route (CPU and GFX), and 3/8" or smaller nipples/tubing for the MB and RAM; I think that would be nice (though I never watercool the MB and RAM)
    Quote Originally Posted by iddqd View Post
    Not to be outdone by rival ATi, nVidia's going to offer its own drivers on EA Download Manager.
    X2 555 @ B55 @ 4050 1.4v, NB @ 2700 1.35v Fuzion V1
    Gigabyte 890gpa-ud3h v2.1
    HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
    Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 8.9.8.27.41 1T 110ns 1.605v
    C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
    DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
    Dell U2412m, G110, G9x, Razer Scarab

  10. #10
    Xtreme Addict
    Join Date
    Feb 2007
    Location
    Denmark
    Posts
    1,450
    Two loops on block

  11. #11
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Hong Kong
    Posts
    1,905
    So if you are running 1 CPU block, 1 GPU block and 3 rads, would there be any benefit in using multiple outlets to, say, rad and GPU, to parallelise the flow? Since rads have such a low pressure drop anyway, I'm not sure.
    -


    "Language cuts the grooves in which our thoughts must move" | Frank Herbert, The Santaroga Barrier
    2600K | GTX 580 SLI | Asus MIV Gene-Z | 16GB @ 1600 | Silverstone Strider 1200W Gold | Crucial C300 64 | Crucial M4 64 | Intel X25-M 160 G2 | OCZ Vertex 60 | Hitachi 2TB | WD 320

  12. #12
    Xtreme Addict
    Join Date
    Apr 2011
    Location
    North Queensland Australia
    Posts
    1,445
    ^^^^what he said^^^^

    Multiple rads per one block.

    This sure does open up a very interesting can of worms

    Great innovation Swiftech!

    -PB
    -Project Sakura-
    Intel i7 860 @ 4.0Ghz, Asus Maximus III Formula, 8GB G-Skill Ripjaws X F3 (@ 1600Mhz), 2x GTX 295 Quad SLI
    2x 120GB OCZ Vertex 2 RAID 0, OCZ ZX 1000W, NZXT Phantom (Pink), Dell SX2210T Touch Screen, Windows 8.1 Pro

    Koolance RP-401X2 1.1 (w/ Swiftech MCP35X), XSPC EX420, XSPC X-Flow 240, DT Sniper, EK-FC 295s (w/ RAM Blocks), Enzotech M3F Mosfet+NB/SB

  13. #13
    Xtreme Member
    Join Date
    Jan 2010
    Location
    Germany
    Posts
    189
    very nice

  14. #14
    Xtreme Addict
    Join Date
    Jan 2008
    Location
    Bonnie Scotland / Sunny England
    Posts
    1,363
    Wow... I'm stunned - really I am!
    PROJECT :: The Xtreme (WET) Dream!!!

    PERSONAL H2O BESTS :
    E8600 @ 4.8GHz
    E6750 @ 4GHz QX9650 @ 4.6GHz
    i7 920 @ 4.6GHz

    PERSONAL AIR BESTS :
    Sempron140 @ 4Ghz (Stock Cooler)
    i7 3960x @ 5.4ghz (Air Cooler)

    Bex : "Who said girls can't play PC games or overclock!? Do I look like your imagination!?"
    Aaron : "TBH, a girl doing all that is a pretty perfect girl!"
    Swift_Wraith : "could someone please check bex for a penis?"

  15. #15
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Posts
    531
    It looks... weird. After so many years of watching Swiftech's shiny plated blocks, this just doesn't totally fit, as if it weren't Swiftech at all. You can clearly see the path they have followed:

    I don't know; if it were shiny (although I bet it would be more expensive) it would look again as Swiftech products always have, and I believe it's a good idea to keep walking the path you started years ago...
    Quote Originally Posted by NKrader View Post
    im sure bill gates has always wanted OLED Toilet Paper wipe his butt with steve jobs talking about ipad..
    Mini-review: Q6600 vs i5 2500K. Gpu scaling on games.

  16. #16
    Xtreme Mentor
    Join Date
    Sep 2007
    Location
    Dallas
    Posts
    4,467
    Personally I like the black. I am kind of tired of the shiny silver look. It's like all the MBs with the red/black theme: nice at first, but now overdone.
    CPUID http://valid.canardpc.com/show_oc.php?id=484051
    http://valid.canardpc.com/show_oc.php?id=484051
    http://valid.canardpc.com/show_oc.php?id=554982
    New DO Stepping http://valid.canardpc.com/show_oc.php?id=555012
    4.8Ghz - http://valid.canardpc.com/show_oc.php?id=794165

    Desk Build
    FX8120 @ 4.6Ghz 24/7 / Asus Crosshair V /HD7970/ 8Gb (4x2Gb) Gskill 2133Mhz / Intel 320 160Gb OS Drive, WD 256GB Game Storage

    W/C System
    (CPU) Swiftech HD (GPU) EK HD7970 with backplate (RAM) MIPS Ram block (Rad/Pump) 3 x Thermochill 120.3 triple rads and Dual MCP355's with Heatkiller dual top and Cyberdruid Prism res / B*P/Koolance Compression Fittings and Quick Disconnects.

  17. #17
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by CedricFP View Post
    So if you are running 1 CPU block, 1 GPU block and 3 rads, would there be any benefit in using multiple outlets to, say, rad and GPU, to parallelise the flow? Since rads have such a low pressure drop anyway, I'm not sure.
    No, there wouldn't. The last thing you want to do is parallelize rads... You need all the flow you can get in the rads.

    The interest is primarily when you are cooling multiple devices after the CPU, like GPU and chipset and others.. Was just talking to a guy on an Australian forum who has 8 blocks.. all in series.. a perfect candidate for that. If you only have a GPU after the CPU, then you don't need this feature. Even if you had 2 or 3 GPUs with FC blocks you wouldn't use it. You'd parallelize the GPUs with each other, that's all. I suppose you could parallelize GPUs if you were using hybrid waterblocks like the MCW82, but it wouldn't be as elegant/convenient as using a bridge.

    Quote Originally Posted by paulbagz View Post
    ^^^^what he said^^^^

    Multiple rads per one block.

    This sure does open up a very interesting can of worms

    Great innovation Swiftech!

    -PB
    Nooooo.. see above about rads.
    CEO Swiftech

  18. #18
    Registered User
    Join Date
    Jul 2011
    Posts
    48
    Very interesting, thank you. I'm wondering, are there plans to incorporate such a design on an MCW82-type block? With waterblocks for the RAM and vreg and small tubing coming off the GPU block? That'd be even more interesting in my opinion, for people who don't water-cool their RAM or motherboard.

  19. #19
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    CEO Swiftech

  20. #20
    Copie
    Guest
    Quote Originally Posted by gabe View Post
    No, there wouldn't. The last thing you want to do is parallelize rads... You need all the flow you can get in the rads.

    The interest is primarily when you are cooling multiple devices after the CPU, like GPU and chipset and others.. Was just talking to a guy on an Australian forum who has 8 blocks.. all in series.. a perfect candidate for that. If you only have a GPU after the CPU, then you don't need this feature. Even if you had 2 or 3 GPUs with FC blocks you wouldn't use it. You'd parallelize the GPUs with each other, that's all. I suppose you could parallelize GPUs if you were using hybrid waterblocks like the MCW82, but it wouldn't be as elegant/convenient as using a bridge.



    Nooooo.. see above about rads.
    I'm on here as well, Gabe

    Anyway, I'll ask the same question as I did on there.
    Correct me if I am wrong, but in parallel you are effectively halving the flow per block added, correct?

    So for 2 blocks, 2.0 GPM gets split into 1.0 GPM each; for 3 blocks, 0.67 GPM, and so on?
    I'm sure one of the resident guys on here will come and point me in the right direction, no doubt.

    And so far it's 6 blocks (2x CPU-370s, 1x SR-2 MIPS full mobo block, 1x MIPS VRM block, 2x EK GTX480 blocks); 2 more GPU blocks will be added when I can find two more GPUs. With dual D5's I'm sitting at a pretty consistent 1.3 GPM across the loop; it has a rather high vertical climb because of the case size (build log on OCAU and on here as well)

  21. #21
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Quote Originally Posted by Copie View Post
    I'm on here as well, Gabe

    Anyway, I'll ask the same question as I did on there.


    I'm sure one of the resident guys on here will come and point me in the right direction, no doubt.

    And so far it's 6 blocks (2x CPU-370s, 1x SR-2 MIPS full mobo block, 1x MIPS VRM block, 2x EK GTX480 blocks); 2 more GPU blocks will be added when I can find two more GPUs. With dual D5's I'm sitting at a pretty consistent 1.3 GPM across the loop; it has a rather high vertical climb because of the case size (build log on OCAU and on here as well)
    Yes, you are correct, but I'll leave the math ratios to Stephen if he reads this.

    Are your GPU's in series or in parallel with each other?

    If they are in series, what you have is pretty similar to the example we give in the OP flow chart, except that you have 2 CPU's instead of one.

    What I'd do in your case is leave both CPUs in series and parallelize all the other satellite blocks (the GPUs should be parallelized with each other, so we'll consider them a group). CPU A goes to B, simple. Then from CPU B, branch out one line to the MIPS SR2 MB block, one to the MIPS VRM block, and one to the GPU group. Then connect all 3 lines back to the MCR Drive inlets. With two pumps in the loop, I'd venture to guesstimate that you'll double your loop flow rate.

  22. #22
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Hong Kong
    Posts
    1,905
    Gabe,

    Are you fellas planning on bringing out an MCR120 Drive?
    -


    "Language cuts the grooves in which our thoughts must move" | Frank Herbert, The Santaroga Barrier
    2600K | GTX 580 SLI | Asus MIV Gene-Z | 16GB @ 1600 | Silverstone Strider 1200W Gold | Crucial C300 64 | Crucial M4 64 | Intel X25-M 160 G2 | OCZ Vertex 60 | Hitachi 2TB | WD 320

  23. #23
    Mr Swiftech
    Join Date
    Dec 2002
    Location
    Long Beach, CA
    Posts
    1,561
    Sorry, not at liberty to discuss this.
    CEO Swiftech

  24. #24
    Xtreme Member
    Join Date
    Mar 2011
    Location
    SoCal
    Posts
    268
    Quote Originally Posted by Copie View Post
    I'm on here as well, Gabe

    Anyway, I'll ask the same question as I did on there.

    Correct me if I am wrong, but in parallel you are effectively halving the flow per block added, correct?

    So for 2 blocks, 2.0 GPM gets split into 1.0 GPM each; for 3 blocks, 0.67 GPM, and so on?
    That's only true if your three blocks have the same flow restriction. If not, the flow in each of the three sub-lines will be different: the highest-restriction sub-line will see the smallest flow rate, etc.
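
    To put numbers on that (a quick sketch; the restriction coefficients are invented for illustration): since every branch sees the same pressure drop and dP = k x Q^2, each branch's flow is proportional to 1/sqrt(k):

    Code:
    # Split a total flow across parallel branches with quadratic restrictions
    # (dP = k * Q**2). Equal dP across branches => flow ~ k ** -0.5.
    # The k values below are invented for illustration.
    def split_flow(q_total, ks):
        weights = [k ** -0.5 for k in ks]
        total = sum(weights)
        return [q_total * w / total for w in weights]

    ks = [2.0, 2.0, 8.0]   # two identical branches, one 4x more restrictive
    for k, q in zip(ks, split_flow(2.0, ks)):
        print(f"k={k}: {q:.2f} GPM")   # 0.80, 0.80, 0.40
    # The most restrictive branch gets the smallest share, as noted above.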

  25. #25
    Copie
    Guest
    Quote Originally Posted by stephenswiftech View Post
    That's only true if your three blocks have the same flow restriction. If not, the flow in each of the three sub-lines will be different: the highest-restriction sub-line will see the smallest flow rate, etc.
    All the blocks (i.e. the GPU blocks) are exactly the same (all EK GTX480 blocks)

    Thanks for the clarification


