
Thread: 3x 30" Portrait Crossfire Eyefinity vs SLI Surround Showdown

  1. #1
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626

    3x 30" Portrait Crossfire Eyefinity vs SLI Surround Showdown

    UPDATE to include my trials with TRI/QUAD 6970s and TRI 3GB 580s:

This thread originally started as a dual 6970 vs 580 comparison. Since two of each of those cards did not get me where I wanted to be FPS-wise, it has somewhat "evolved". My system changed from an i7 920 D0 @ 4.55GHz on X58 (16x PCI-e for 2 GPUs) to a 2600K on an Asus P8P67 WS Revolution that can accommodate Quad-Fire and Tri-SLI (8x PCI-e on all lanes when using 3 or more GPUs). I've kept the 2600K at 4.6GHz to keep the playing field as level as possible during the platform switch. Apples to apples, I've found I've lost approximately 2-5% performance switching from pure 16x X58 lanes to 8x P67 lanes with an NF200 tossed in there.

I immediately ran into problems going Tri/Quad-fire with the 6970s. Everything was crashing left and right. I dug up some research stating that the crossfire interconnects were never designed to push that much bandwidth at the resolutions I was running. In my opinion the series design of the crossfire interconnects is flawed: the bridge from card one to two and then on down the line has to handle way too much bandwidth in Tri/Quad-Fire setups at a resolution this high and just couldn't cope. Right about the time I was figuring all of this out, 3GB 580s appeared on the horizon, which is the reason for this update.

My largest complaints originally with the 580s were the severe lack of VRAM and the DX9 portrait mode FPS limit bug. Those were total deal killers. I am glad to report that both are solved with these latest cards and drivers! I've included a few Tri-6970 benchmarks that did run, but deleted the Quad-6970 line as everything besides Resident Evil 5 crashed (RE5 also allowed the sole Quad-Xfire benchmark). The Palit 3GB 580s were modestly clocked to 840/2050. Afterburner 2.1.0 Beta 7 can adjust voltages on these non-reference cards. All benchmarks are with driver texture quality settings at maximum, 16x AF forced, and 2x MSAA.





Heaven 2.1 - You can see a third 580 and the 3GB of VRAM really let the benchmark breathe. It gets over twice the FPS of two standard 580s. Tri/Quad 6970s couldn't even run the benchmark.

    Resident Evil 5 - I don't know if this game is particularly tweaked for AMD, but this is the one game where the 6970s came out on top.

Eve - A recent client update changed the graphics settings and such, so Eve couldn't be used as a comparison anymore.

DCS:A10C Beta 4 - This is really what I was hoping was going to happen. A-10C was VRAM limited not only on the stock 580 but also on the 6970s, as it uses tons of VRAM for some reason. Here the 3GB just let the sim open wide up and breathe. 82 FPS at this resolution on such a demanding simulator. And my 3x 3GB 580s were still all at 100% usage! Room to grow (wink). I am completely flabbergasted by these results.

AvP DX11 - Here you can see a sweet 63% performance gain going from two regular 580s to the Tri 3GB setup. Incredible gains.

Hawx 2 - Even though the 96 FPS average is the highest recorded in portrait mode out of all the setups, I did find an anomaly. The Fraps counter was pegged at 101/102 virtually the entire time, yet per Afterburner, GPU usage was only at about 60% across the board. Something was artificially limiting it, and it wasn't the CPU. (To be continued below.)

BF:BC2 - 97 FPS. The numbers speak for themselves; just amazing. This mystical 101/102 FPS bug appeared once again.

WoW - Sorry, account expired.

Rise of Flight - I am not sure what to make of this one. They just did an update and it appears how SLI works has been altered for the worse; I was only getting 30% usage out of the GPUs. I have a post on their forums to see if there's a way I can get SLI working properly.

Left 4 Dead 2 - Although the Tri-SLI numbers are the highest, they are only about 10% faster than the dual-6970 numbers. If Tri-6970 didn't crash, I think it easily would have taken this benchmark.

    Batman - A silky smooth 62 FPS with all settings maxed and 2x AA.

    Metro - Here the Tri-6970s actually did work, but as you can see the 3GB 580s decimated everything. 60 FPS average at 12.3 Mega-pixels!

MS:FSX - Really didn't show much improvement over 2x regular 580s, but this is a notoriously CPU-limited sim. Still quite a bit better of a showing versus AMD.

    Lost Planet 2: Once again a great showing for the 3GB 580s.


Some random thoughts about 3GB frame buffer usage as viewed with MSI Afterburner 2.1.0 Beta 7. They need to update the graph limit as it stops at 2048MB, lol, although the actual number shown is correct. Virtually all of the games tested, even with just 2x AA, blew through the 1536MB limit of the standard 580s. DCS A-10C used 2200MB, Heaven 2.1 with 8x AA used 2700MB and Eve Online at 16x AA used 2800MB of VRAM. You can see that even with the 2GB 6970s, you run into problems at anything more than 2x AA.
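To put those readings in context, here's a rough sketch (my own simplification, not how the driver actually allocates memory; the 4 bytes/pixel and buffer counts are assumptions) of what just the render targets cost at this resolution:

```python
# Rough render-target VRAM estimate; real drivers allocate differently.
# Assumptions: 4 bytes/pixel for color and depth, MSAA multiplies both,
# plus a resolved double-buffered swap chain. Textures come on top.

def render_target_mb(width, height, msaa=1, swap_buffers=2):
    px = width * height
    color = px * 4 * msaa             # multisampled color buffer
    depth = px * 4 * msaa             # multisampled depth/stencil buffer
    resolved = px * 4 * swap_buffers  # resolved front/back buffers
    return (color + depth + resolved) / 1024**2

# 3x 30" portrait, bezel corrected (5080x2560), 2x MSAA:
print(f"{render_target_mb(5080, 2560, msaa=2):.0f} MB")  # ~298 MB
```

Even generously counted, the render targets come to only a few hundred MB; the bulk of those 2200-2800MB readings is presumably textures and everything else, which is why the 1536MB cards fall over as soon as AA is added.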

I think the 3GB also helped in various other ways. It's just my hunch that the memory usage Afterburner displays covers only game textures, AA and the like, and doesn't take into account VRAM used for triple buffering and SLI needs. Even using an 8x/8x/8x PCI-e setup on the P67, these numbers are pretty darn impressive. I didn't notice any micro-stuttering, lag, pausing, hitching or the like. Everything was silky smooth and I attribute a lot of that to the VRAM quantity. I was also worried that adding a 3rd card would add noticeable input lag, but it appeared to operate just as fast as the single- and two-card setups, which is nice. Something I cannot say for the Tri/Quad 6970 setup, which at times exhibited strange mouse/input lag in some tests. Whether that was due in part to the limited crossfire interconnects I do not know.

Mini-review of the Palit cards: I was at first suspicious of Palit as I'd never even heard of the company before. I had heard of Gainward, and originally heard about 3GB 580s through them. Gainward does use a nice cooler setup, but it's a 2.5-slot heat pipe design which limits multi-GPU scenarios. The Palit uses a 2-fan, 2.0-slot heat pipe cooler. The colors are a bit unusual when viewed online but in person they aren't that bad. The black is a nice polished piano black and the orange is darker than in the pictures. The build quality seems pretty good. All three worked overclocked right out of the box, so no complaints here.

The nice thing about them is the sound level. Check out the dB recordings in the chart. Just to put this into perspective, the 2x 6970s at max load/fan are over 50% louder than the 3x Palit cards at max load/fan! Need I say more? One caveat though: I had to add an extra fan (a high-quality, quiet Scythe) to blow down on the 3x Palit cards to help with temperatures on my open bench. The rigid 3x SLI connector puts the three Palit cards real close together, limiting airflow. The good news is that EK is making water blocks for the Palit/Gainward 5xx series GPUs, which is great. Without the Scythe fan blowing down on the cards, the top (#1) card would push 97 C. With the extra fan, max temps are around 80 C. The last card, which can breathe freely, sees max temps of around 67 C under these overclocked conditions.

All-in-all I am highly impressed with what Palit has accomplished with these cards. I know designing a high-quality PCB from scratch and custom coolers isn't left to the unskilled. First to market with them to boot. I did have some issues with nVidia's latest drivers; you can find my workaround at around page 23 of this thread.

The mysterious 101/102 FPS limit doesn't really concern me much. The old 39/40 FPS portrait bug was really annoying because obviously that's too low a number to settle for. As long as I am at 60 FPS to match my monitors' refresh rate I am happy. At this time I am not going to try to hunt down how, why and under what conditions that limit occurs, as it obviously doesn't affect my gameplay, only benchmarks at stupidly high FPS.

    Another thing to note is that even though the pixels "hidden" behind the bezels while using bezel correction aren't displayed, they do create a small performance hit. I am not sure if the driver just leaves color assignment to those pixels blank or assigns them all black, but the performance impact is fairly small (tested less than 5%).
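For a rough sense of that overhead, a quick sketch (using the 5080x2560 bezel-corrected figure quoted later in this thread against the 4800x2560 of visible panel area, so treat the exact percentage as approximate):

```python
# Pixels rendered but hidden behind bezels with bezel correction on.
visible = 4800 * 2560    # 3 portrait 2560x1600 panels side by side
rendered = 5080 * 2560   # bezel-corrected resolution used in this thread
print(f"{1 - visible / rendered:.1%} of rendered pixels are never shown")  # ~5.5%
```

About 5.5% of the rendered pixels are thrown away, which lines up with the small measured hit.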

The purpose of this thread is not to say nVidia is better than AMD or that AMD is better than nVidia. I think brand loyalty is stupid. Whoever comes out with the best product for my money wins. If AMD can come up with a better solution for my needs at the next card refresh, I'll be right back to pit them head to head.

On a non-GPU-related note, I have not been particularly impressed with this Asus P67 offering and am considering going back to X58 until socket 2011 arrives. My current thinking is the soon-to-be-released Gigabyte X58A-OC "Orange" board, which will allow 4x 3GB 580 usage under liquid cooling with 2x NF200s via quad electrical 16x PCI-E. Toss in a 990X at around 5.5GHz or so in combination with my sub-zero liquid cooling build and I think I can call it a day.







Original test setup (this info is for reference only and may be outdated by the information above):

    580s SLI lightly OC'd 800/1600 running driver 263.09. 6970s CF lightly OC'd 950/1420 running driver 10.12a.

    All Benchmarks are bezel corrected (besides 5760x1200) and have 2x AA. ZR30Ws have their plastic bezels removed and monitors clamped together to get the smallest vertical gap between screens.

    i7 920 D0 @ 4.55 GHz
    6GB DDR3
    Intel SSD
    Asus Xonar Essence STX Sound
    3x 30" HP ZR30w's in Portrait





I don't have any fancy graphics for the benchmark numbers, so a spreadsheet view will have to do. Going from left to right, the numbers mean average / min / max. ^ = nVidia DX9 AFR portrait FPS limit bug of 39/40. More on that later.
All in-game settings are maxed unless otherwise stated. 16x AF is forced in both drivers, along with disabling the "optimization" driver settings, which only gain you 1-2% performance anyway. Sorry 5760x1200 users: the 580 benchmarks went flawlessly since you can natively select that resolution in Surround, but AMD wouldn't let me set it no matter what I did. Only Heaven 2.1's custom resolution worked. Manually editing the other games' files didn't work and gave D3D errors etc. I tried registry edits to get 1920x1200, but that wouldn't work in Eyefinity. The last resort was editing EDID files, but that's a bit involved and not worth it for a resolution I will never use. Blame AMD's wonderful driver team for not allowing a basic 1920x1200 16:10 resolution on a 16:10 30", wtf?




    * = 580 memory limit reached as viewed per MSI Afterburner.

    The breakdown:

    Heaven 2.1 DX 11 - 6970 CF comes out above the 580s here. The 580s run into memory problems. But even at the lower resolution where memory isn't as big of an issue, the 6970s still win out. Crossfire scaling is around 85%.

    Resident Evil 5 - 6970s win by a fair margin.

Eve Online - Even though SLI does work in windowed mode, I pulled in a lower number with the 580s than with the 6970s, where windowed mode only works with a single card. Win: 6970.

    DCS:A10C Beta / Medium in game settings - Mixed results here, the 580s ran into serious memory issues.

    Aliens vs. Predators DX11 benchmark - Clear winner for the 580s.

    Hawx 2 benchmark - In landscape you can see the 580s trump the 6970s. The 6970s win in portrait mode because of the DX9 AFR portrait bug on the 580s.

    Battlefield: Bad Company 2 - One of my favorite games. The 580s win here but just slightly. You can see 95+% crossfire scaling.

    World of Warcraft DX 11 mode - Clear winner in the 6970s.

    Rise of Flight - Favor to the 6970s.

    Left 4 Dead 2 - 6970s win.

    Batman - Guess what 580s? DX9 AFR bug, yet again. The 6970s do well with virtually 100% crossfire scaling, yet if the 580s portrait bug wasn't there I believe it would have won out in portrait mode just like it did in landscape.

    Metro 2033 - The 580s just edge out the 6970s here in portrait mode.

    MS: FSX DX10 - Crossfire didn't seem to be kicking in here much for some reason, giving the nod to the 580s.

    Lost Planet 2 - Favor: 580s.

There's no way around the nVidia Surround DX9 AFR portrait mode FPS limit bug that I could find. You also cannot enable triple buffering via D3DOverrider in AFR, only in SFR mode. That poses a problem: in order to eliminate screen tearing you need VSync, and in order to get VSync without tremendous input lag you need triple buffering. You see where that is going.

Without VSync on, both the nVidia and AMD camps get horrible screen tearing. This occurs no matter the FPS: above 60, below 60, it doesn't matter. Some games automatically enforce triple buffering to limit the input lag, but for the ones that don't you need D3DOverrider/ATI Tray Tools etc.
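For anyone wondering why triple buffering matters so much with VSync, here's a simplified model (a sketch that ignores driver queueing and input-lag effects): with double buffering the GPU stalls until the next vblank whenever a frame misses the refresh deadline, so the frame rate snaps down to 60, 30, 20, 15...; a third buffer lets the GPU keep rendering instead of stalling.

```python
import math

REFRESH = 60.0  # Hz, matching the ZR30Ws

def fps_double_buffered(render_ms):
    # A missed deadline means waiting for the next vblank, so fps
    # quantizes to REFRESH/1, REFRESH/2, REFRESH/3, ...
    vblank_ms = 1000.0 / REFRESH
    return REFRESH / math.ceil(render_ms / vblank_ms)

def fps_triple_buffered(render_ms):
    # The GPU renders into a third buffer instead of stalling;
    # the display rate is still capped at the refresh rate.
    return min(REFRESH, 1000.0 / render_ms)

for ms in (15.0, 18.0, 25.0):
    print(f"{ms:.0f} ms/frame: {fps_double_buffered(ms):.1f} fps "
          f"double vs {fps_triple_buffered(ms):.1f} fps triple")
# 18 ms/frame: 30.0 fps double buffered vs 55.6 fps triple buffered
```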

Ease of setup - The 580s were much less painful to set up and get running in SLI. Run 3x DVI-D cables, and the displays just "work" like they are supposed to, even with the 3x 25-foot cables I use to get the signals from one room, through the wall, into the computer area. Not having to deal with the sound and heat of a computer in your immediate area is so sweet. Install the drivers, set up the Surround resolution, bam, you're done. The 580s also overclock real easy in SLI and didn't produce many problems. AMD has a bad driver stigma because their drivers, well, suck!
    Recap:

580s would be much more competitive in DX9 portrait games if nVidia sorted out their drivers and fixed the FPS limit bug. Too bad most games are still DX9, as the bug doesn't exist when using DX10/11, which you can see in the BF:BC2 numbers. If nVidia doesn't fix their drivers, this will become less important as more games leave DX9 behind.
The 580's 1.5GB of memory limits its performance at such a high resolution. At 5760x1200 and below it isn't as much of a factor.
SLI works in windowed mode, CF does not.
SLI works in virtually every game, CF does not in a select few.
2GB of VRAM means the 6970s can run the 3x 30" resolutions at 4x AA and up.
Crossfire scaling is quite good; 90-100% gains are normal from adding a second card.
DisplayPort sucks.
Noise levels and heat output are similar for both setups. At 100% GPU load in my super air-cooled case, GPU temps average 65-70 deg C max for both setups. The 6970s are noticeably louder, which I confirmed using a dB meter.
In order to save a nickel, AMD decided to make one of the DVI ports single-link only!
VSync on the 580s in Surround is perfect using 3x DVI-D cables. VSync with the 6970s presents a large problem in Eyefinity, as you are mixing DisplayPort/DVI-D signals and a single permanent, obvious screen tear will always be present. The only way around this is to wait for "Eyefinity 6" edition 6970s or get the 3x DP 6990 when they are released.

    Major pitfalls with 580's 3x 30" Surround: 1.5GB VRAM, DX9 Portrait AFR FPS limit.
    Major pitfalls with 6970's 3x 30" Eyefinity: Display port, mixed display connectivity screen tear, AMD drivers.

I have been thinking about whether or not to try out the 6990 cards in CF for four GPUs, if they launch with 4GB (2GB frame buffers). This does pose some problems. Quad CF is notorious for having issues and increasing input lag because of all of the buffer flipping. Single-GPU games like Rise of Flight and Eve in windowed mode would actually decrease in performance, as you would only be using one of the four GPUs, which would be at a lower clock. In some games like DCS-A10C, I'd run into CPU limits way before 4 GPUs are pushed to the max. Besides Batman, with the 6970 CF setup I can already run a minimum of 60 FPS with some room to breathe, and lock the FPS there. I don't even think a GTX 595 would compete unless they came out with 6GB cards. Yes, 6GB would be needed (a 3GB frame buffer per card) or it would just crash like the 580s do when they run out of VRAM.

    If anyone has any good ideas on how to get the "custom" 5760x1200 resolution to work with the 6970's, I am all ears and would finish the benchmarks for that resolution.

    I will try and update these benchmarks when new drivers are released by both sides.

    Please let me know if you have any questions.
    Last edited by Callsign_Vega; 02-07-2011 at 11:34 AM.

  2. #2
    Registered User
    Join Date
    Oct 2004
    Location
    france
    Posts
    98
    great!

  3. #3
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Interesting info.
    Looks like 2GB Radeons were not designed to power 3x 30 inch displays. And 3GB cards do much better than 1.5GB ones in surround scenarios, as expected.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  4. #4
    Xtreme Member
    Join Date
    Jun 2007
    Location
    Bay Area
    Posts
    164
thx for the info. does the portrait mode bug only exist in dx9? also, did they solder the extra memory onto the backside of the cards? and is there an additional backplate or something to help remove heat there? or did they use double-density chips? thanks!
    1. INTEL E5200 M0 (200x12.5 @1.175), Abit IP-35Pro, 2x2GB DDR2-1000, GT240
    2. INTEL i7-920 C0 (200x19 @1.275), Gigabyte EX58-UD4P, 4x4GB DDR3-1333, GTX680 FTW+ 4GB



  5. #5
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by darckhart View Post
thx for the info. does the portrait mode bug only exist in dx9? also, did they solder the extra memory onto the backside of the cards? and is there an additional backplate or something to help remove heat there? or did they use double-density chips? thanks!
    Double density memory chips. No backplates on these cards unfortunately. The DX9 portrait bug was fixed in these latest drivers.

  6. #6
    Registered User
    Join Date
    Feb 2010
    Posts
    81
    Hi
    Can you run a unigine with the same settings with 3x580 to see the difference ?
    Thanks in advance

3x 6950 with unlocked shaders running at the 6970's default clocks of 880/1375
    Asus M5G / i7 3770K / Corsair H100 / G.SKILL TridentX 2400MHz / HD7970 / SSD Intel Postville 2x80GB RAID0 / HDD 2x1TB Samsung HD103UJ / PSU TT 1200W / Dell 3xU2311 ef / TJ07

  7. #7
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
Just curious how come I can run 2560x1440 with 2x SSAA in everything I play and I don't get crashes... That's equivalent to a resolution of 5120x2880, is it not? I should be running out of interconnect bandwidth according to your results.
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  8. #8
    Xtreme Addict
    Join Date
    Jun 2006
    Location
    Florida
    Posts
    1,005
    I did a similar review but in landscape at 7920x1600 over at EVGA forums:

    http://www.evga.com/forums/tm.aspx?m=833917
    Core i7 3770K
    EVGA GTX780 + Surround
    EVGA GTX670
    EVGA Z77 FTW
    8GB (2x4GB) G.Skill 1600Mhz DDR3
Ultra X3 1000w PSU
    Windows 7 Pro 64bit
    Thermaltake Xaser VI

  9. #9
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
    Maybe someone can do the math to see how much bandwidth is actually being put through the interconnect at that res?

    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  10. #10
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by zpaf View Post
    Hi
    Can you run a unigine with the same settings with 3x580 to see the difference ?
    Thanks in advance

3x 6950 with unlocked shaders running at the 6970's default clocks of 880/1375
    It won't let me run landscape resolutions while in portrait. Normally it would be a quick switch but if you read my original post, with the bugged nVidia drivers it would take me about two hours to change lol.

  11. #11
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by Roadhog View Post
Just curious how come I can run 2560x1440 with 2x SSAA in everything I play and I don't get crashes... That's equivalent to a resolution of 5120x2880, is it not? I should be running out of interconnect bandwidth according to your results.
    Where do you get the idea that 2560x1440 with 2x SSAA is even anything remotely as demanding as 5080x2560 with 2x MSAA?

    To put that into perspective, you are running 29.9% of the resolution I am.
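For reference, the raw pixel counts behind that percentage (a quick sketch; the figure shifts a little depending on whether you count the bezel-corrected or the visible resolution):

```python
# Pixel-count comparison: one 2560x1440 panel vs 3x 30" portrait Surround.
single = 2560 * 1440     # ~3.69 MP
surround = 5080 * 2560   # ~13.00 MP, bezel corrected
print(f"{single / surround:.1%}")  # ~28.3% (~30.0% vs the 4800x2560 visible area)
```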
    Last edited by Callsign_Vega; 02-06-2011 at 02:26 PM.

  12. #12
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by Roadhog View Post
    Maybe someone can do the math to see how much bandwidth is actually being put through the interconnect at that res?

Hmm, interesting. If someone was bored and could do the numbers, that would be sweet. I notice SLI Zone says the SLI interconnect does 1GB/sec. Not much more than crossfire, but SLI is connected in parallel whereas crossfire is a straight series chain, and parallel is obviously the better method in virtually all applications.

  13. #13
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
    Quote Originally Posted by Callsign_Vega View Post
    Where do you get the idea that 2560x1440 with 2x SSAA is even anything remotely as demanding as 5080x2560 with 2x MSAA?

    To put that into perspective, you are running 29.9% of the resolution I am.
If I'm not mistaken, supersampling renders the frame at 2x, 4x, 8x and so on, then downsamples it by the same amount...

So, if I run 2x SSAA, then I am actually rendering a frame of 5120x2880. It should take up the same memory footprint and bandwidth.
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  14. #14
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
    Quote Originally Posted by Callsign_Vega View Post
Hmm, interesting. If someone was bored and could do the numbers, that would be sweet. I notice SLI Zone says the SLI interconnect does 1GB/sec. Not much more than crossfire, but SLI is connected in parallel whereas crossfire is a straight series chain, and parallel is obviously the better method in virtually all applications.
    Says otherwise here:
    http://en.wikipedia.org/wiki/Radeon_R700

    According to that the port is full duplex.

    Current generation of dual-GPU design also features an interconnect for inter-GPU communications through the implementation of a CrossFire X SidePort on each GPU, giving extra 5 GB/s full-duplex inter-GPU bandwidth.
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  15. #15
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by Roadhog View Post
If I'm not mistaken, supersampling renders the frame at 2x, 4x, 8x and so on, then downsamples it by the same amount...

So, if I run 2x SSAA, then I am actually rendering a frame of 5120x2880. It should take up the same memory footprint and bandwidth.
That's comparing apples to oranges. In my testing SSAA doesn't use nearly as much bandwidth as you are describing. I basically don't use it because it just makes everything blurry.

    Quote Originally Posted by Roadhog View Post
    Says otherwise here:
    http://en.wikipedia.org/wiki/Radeon_R700

    According to that the port is full duplex.
The crossfire bridge that all AMD cards use today is only 900 Mbit/s, as shown on the chart (top). There is no such thing as a SidePort on the 69xx series.

Of course it is duplex, but the cards are only connected in series, so card #1 has to go through card #2 to talk to card #3.

In SLI, cards can talk directly to each other without eating up bandwidth: card #1 can talk directly to card #3 without interfering with the bandwidth of card #1 talking to card #2.
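Here's a rough estimate of what that series link has to carry (my own sketch, assuming AFR ships completed 4-byte/pixel frames to the display card, as the TechReport piece quoted later in this thread describes, and that a daisy chain forces every other card's frames across the first link):

```python
# Estimated AFR frame traffic over the first bridge link in a 3-card chain.
BYTES_PER_PIXEL = 4  # assumes raw 32-bit frames, no compression

def first_link_gbs(width, height, total_fps, cards):
    frame_bytes = width * height * BYTES_PER_PIXEL
    per_card_fps = total_fps / cards           # AFR splits frames evenly
    slave_frames = per_card_fps * (cards - 1)  # all slave frames cross link 1
    return frame_bytes * slave_frames / 1024**3

# 3x 30" portrait (bezel corrected, 5080x2560) at a combined 60 fps, tri-fire:
print(f"{first_link_gbs(5080, 2560, 60, 3):.2f} GB/s")  # ~1.94 GB/s
```

Call it roughly double what a ~1GB/s link can move, which would fit Tri/Quad-fire crashing at this resolution but working fine once the resolution is lowered.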

    Last edited by Callsign_Vega; 02-06-2011 at 09:04 PM.

  16. #16
    Xtreme Addict
    Join Date
    Mar 2005
    Posts
    1,122
So, why doesn't AMD come out with a bridge similar to what the SLI setup uses? Wouldn't that solve the bandwidth issues, if you were using both of the connections per card?
    X299X Aorus Master
    I9 10920x
    32gb Crucial Ballistix DDR4-4000
    EVGA 2070 Super x2
    Samsung 960 EVO 500GB
    4 512gb Silicon Power NVME
    4 480 Adata SSD
    2 1tb HGST 7200rpm 2.5 drives
    X-Fi Titanium
    1200 watt Lepa
    Custom water-cooled View 51TG



  17. #17
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by screwtech02 View Post
So, why doesn't AMD come out with a bridge similar to what the SLI setup uses? Wouldn't that solve the bandwidth issues, if you were using both of the connections per card?
AMD has basically come out and said they dropped the ball when designing the current crossfire bridge.

Hopefully with the next card refresh down the road they completely redesign it. I wouldn't be surprised if the penny pinchers win out though, seeing as the crossfire bridge issues only affect extreme users.

  18. #18
    World Champion - IRONMODS
    Join Date
    Sep 2007
    Location
    Northern Japan
    Posts
    2,029
    Quote Originally Posted by Callsign_Vega View Post
AMD has basically come out and said they dropped the ball when designing the current crossfire bridge.
    Do you have a link?
    Quote Originally Posted by Massman
    My definition of 'efficient' is 'it does not suck monkeyballs'. Yes, I set bars low.
The post counter is not an intelligence meter!

    MAX11L - "It's like a console...with the suck turned down and the awesome turned up" -tet5uo
    Heat Team IRONMODS

  19. #19
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
    Yeah, I don't believe that the bridge is causing any of the problems. It has to be a driver issue.
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  20. #20
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
The thing about CFX is that it does not have master/slave cards; SLI does.
That helps CFX scaling and reduces the amount of info transferred between cards.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  21. #21
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
You should try 11.1a for ATI; 10.12a barely supports the cards. Also the latest for nVidia, too.
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

  22. #22
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
I was also wondering if the compatibility issues (like the crashes) could be avoided for xfire by using an AMD-based motherboard. Surely it won't be the same comparison with the CPUs being so different, but it might offer interesting results.
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  23. #23
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Afterburner 2.1.0 Beta 7 can adjust voltages on these non-reference cards.

  24. #24
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by miahallen View Post
    Do you have a link?
    "Generating that many pixels at the right quality levels would tax any single graphics chip. Making CrossFire work on this scale presents some challenges, however, as AMD readily admits. The core issue is the fact that the dedicated CrossFire interconnect used for passing completed frames between cards has "only" enough bandwidth to sustain a 2560x1600 display resolution. Even three 1080p displays will exceed its capacity. The alternative is to transfer frame buffer data via PCI Express, which is what AMD does when necessary. Using PCIe works, but it can limit performance scaling somewhat—don't expect to see the near-linear scaling one might get out of a dual-card setup in the right game with a single display. That's not to say mixing CrossFire with Eyefinity won't be worth doing. Based on AMD's performance estimates, frame rates could improve between about 25% and 75% when adding a second GPU with a 12-megapixel, six-monitor array."

    http://techreport.com/articles.x/18521/2
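The quoted pixel budget is easy to sanity check with raw pixel counts (the 2560x1600 figure is straight from the quote above):

```python
# Interconnect sized for one 2560x1600 display, per the TechReport quote.
bridge_budget = 2560 * 1600      # ~4.10 MP
three_1080p = 3 * 1920 * 1080    # ~6.22 MP
this_thread = 5080 * 2560        # ~13.00 MP, 3x 30" portrait bezel corrected
print(f"{three_1080p / bridge_budget:.2f}x")  # 1.52x the budget
print(f"{this_thread / bridge_budget:.2f}x")  # 3.17x the budget
```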

There's also a HardForum review of 5870s in crossfire at super-high resolutions in which AMD told them to put both crossfire bridges on to eliminate bandwidth problems. In three- and four-way crossfire you can obviously only use one interconnect per card.

    There's more to be found Googling.

If I ran a single 30", Tri/Quad crossfire would work fine. I also tried every driver version released since the 69xx series came out. On top of that, if I simply lowered the Eyefinity resolution past a certain point, Tri/Quad crossfire would magically start working again. I found tests showing the crossfire bridge was one of the main issues with 2x 5970s not working properly in quad-fire, and also references from AMD saying the crossfire bridge wasn't even designed to handle multi-display bandwidth, let alone resolutions this extreme.

    I think all of this clearly points to a crossfire bridge limitation.

  25. #25
    Would you like some Pie?
    Join Date
    Nov 2009
    Posts
    269
Well, that's a pretty crappy deal. :|
    Xeon W3520 @ 4.0Ghz ~ 3x 7970 ~ 12GB DDR3 ~ Dell U2711

