Thread: Lucid Hydra vs SLI vs CFX: 13 multi-card config tested

  1. #1
    Champion
    Join Date
    Jan 2007
    Posts
    449

    Lucid Hydra vs SLI vs CFX: 13 multi-card config tested

    After managing to activate SLI on the Crosshair IV Extreme, I decided to study the performance scaling of HydraLogix technology vs SLI vs CrossFireX. To be fair, I used the SAME motherboard for all the testing.

    I tested 13 different multi-card configurations:

    • HD5850 + GTX460
    • HD5870 + GTX470
    • GTX480 + HD5870
    • GTX470 + GTX460
    • GTX480 + GTX470
    • GTX480 + GTX470 + GTX460
    • 2 x GTX460 (Hydra)
    • 3 x GTX460 (Hydra)
    • 2 x GTX460 (SLI)
    • HD5870 + HD5850 (Hydra)
    • HD5870 + HD5850 (CFX)
    • 2 x HD5870 (Hydra)
    • 2 x HD5870 (CFX)


    along with each card tested in single-card mode.

    You can find the test over at Lab501: Lucid HydraLogix 200 – Anything is possible

    I hope you like it!
    Born to lose, live to win!

  2. #2
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Not to be an ungrateful pain in the ass, but this kind of testing should be done by measuring the time between frames rather than just FPS to take microstutter (if any) into account.

  3. #3
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Well, looks like Hydra is still quite pointless since CFX and SLI scale better.
    Good info, thanks.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  4. #4
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    US
    Posts
    1,379
    Quote Originally Posted by zalbard View Post
    Well, looks like Hydra is still quite pointless since CFX and SLI scale better.
    Good info, thanks.
    Look at the 2x 5870 CFX vs 2x 5870 Hydra results... that doesn't appear to be the case on a consistent basis. It's scary to see a third-party solution match, and in two cases (both Vantage presets) beat, a vendor-supplied solution. Horrible CFX scaling in the current ATI drivers is probably at fault, though.

    --Matt
    My Rig :
    Core i5 4570S - ASUS Z87I-DELUXE - 16GB Mushkin Blackline DDR3-2400 - 256GB Plextor M5 Pro Xtreme

  5. #5
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by mattkosem View Post
    Look at the 2x 5870 CFX vs 2x 5870 Hydra results... that doesn't appear to be the case on a consistent basis. It's scary to see a third-party solution match, and in two cases (both Vantage presets) beat, a vendor-supplied solution. Horrible CFX scaling in the current ATI drivers is probably at fault, though.

    --Matt
    I think it would be awesome if he had used a 6870 for the Hydra vs CrossFire comparison, since we've seen the 6800s scale far better than the 5800s. It's still great review work, though.
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  6. #6
    Xtreme Cruncher
    Join Date
    Apr 2006
    Posts
    3,012
    I had such high hopes for Hydra, and things are getting better, but they are still not very impressive. Vantage shows good scaling, considering that 2x 5870s and 2x GTX 460s are just as fast, if not faster, than their CFX and SLI counterparts. However, that still seems to be limited to Vantage, and performance can even get worse in some cases. Games seem to be hit and miss; AvP looks impressive, but I play more than AvP and Vantage...
    CPU: Intel Core i7 3930K @ 4.5GHz
    Mobo: Asus Rampage IV Extreme
    RAM: 32GB (8x4GB) Patriot Viper EX @ 1866mhz
    GPU: EVGA GTX Titan (1087Boost/6700Mem)
    Physx: Evga GTX 560 2GB
    Sound: Creative XFI Titanium
    Case: Modded 700D
    PSU: Corsair 1200AX (Fully Sleeved)
    Storage: 2x120GB OCZ Vertex 3's in RAID 0 + WD 600GB V-Raptor + Seagate 1TB
    Cooling: XSPC Raystorm, 2x MCP 655's, FrozenQ Warp Drive, EX360+MCR240+EX120 Rad's

  7. #7
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Location
    Bloomfield Evergreen
    Posts
    607
    Maybe the CPU was a bottleneck there.

  8. #8
    Champion
    Join Date
    Jan 2007
    Posts
    449
    The CPU was a bottleneck only in Resident Evil 5.
    Note that in Vantage and Unigine, using X-Mode (ATI+NVIDIA), I had huge stuttering problems and missing textures. AvP scaling was good and image quality was perfect, but still, only one game?
    The drivers are the main problem, because the technology itself is good and has potential!

    Thanks for the feedback guys!
    Born to lose, live to win!

  9. #9
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    435
    Quote Originally Posted by One_Hertz View Post
    Not to be an ungrateful pain in the ass, but this kind of testing should be done by measuring the time between frames rather than just FPS to take microstutter (if any) into account.
    Standard deviation or variance would be a much more useful metric than mean time between frames to measure microstutter, actually.
    i7 920 D0 / Asus Rampage II Gene / PNY GTX480 / 3x 2GB Mushkin Redline DDR3 1600 / WD RE3 1TB / Corsair HX650 / Windows 7 64-bit

  10. #10
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by ElSel10 View Post
    Standard deviation or variance would be a much more useful metric than mean time between frames to measure microstutter, actually.
    Standard deviation only captures the overall spread of the data points, not their order, so going from 10 ms to 20 ms and back to 10 ms over and over would give the same std dev as 10 ms a whole bunch of times followed by 20 ms a whole bunch of times. But even to get that far you have to have all the data anyway, and I think the best way is to pick the min and max frame delay (in ms) per second and plot it like an area chart.
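    For illustration, here's a minimal sketch of that min/max-per-second idea, assuming nothing more than a list of FRAPS-style frame times in milliseconds (the function name and the sample data are made up):

```python
# Sketch: bucket frame times (ms) into one-second windows and keep each
# window's min and max -- the two series you would plot as an area chart.

def per_second_min_max(frame_times_ms):
    """Return one (min_ms, max_ms) tuple per elapsed second of the capture."""
    buckets = []
    window = []       # frame times inside the current one-second window
    elapsed = 0.0     # milliseconds accumulated in the current window
    for ft in frame_times_ms:
        window.append(ft)
        elapsed += ft
        if elapsed >= 1000.0:                 # a full second has passed
            buckets.append((min(window), max(window)))
            window = []
            elapsed -= 1000.0
    if window:                                # leftover partial second
        buckets.append((min(window), max(window)))
    return buckets

# Made-up data: ~60 fps frames with one 90 ms hitch in the middle
times = [16.7] * 50 + [90.0] + [16.7] * 60
for sec, (lo, hi) in enumerate(per_second_min_max(times)):
    print(f"second {sec}: min {lo:.1f} ms, max {hi:.1f} ms")
```

    Plotting the two series over time, the gap between min and max widens wherever a stutter happens.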
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  11. #11
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by Manicdan View Post
    I think it would be awesome if he had used a 6870 for the Hydra vs CrossFire comparison, since we've seen the 6800s scale far better than the 5800s. It's still great review work, though.
    I don't think there are Hydra drivers for the 6xxx series yet; it takes time because Lucid has to make a driver for each GPU combination.

    If 3DMark is the only place where real gains can be found, that makes the tech interesting, but only usable for overclockers. For gamers it seems that SLI and CrossFire are still the best.

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  12. #12
    Xtreme Addict
    Join Date
    Mar 2009
    Posts
    1,116
    Quote Originally Posted by Manicdan View Post
    Standard deviation only captures the overall spread of the data points, not their order, so going from 10 ms to 20 ms and back to 10 ms over and over would give the same std dev as 10 ms a whole bunch of times followed by 20 ms a whole bunch of times.
    You're talking about expressing changes in frame rates. I hadn't considered that anyone would want to measure that; it can be done, though. But I wonder whether it is useful information compared to the alternatives.

    Let's start over from scratch.

    With FRAPS you can record the time at which every frame occurs.

    You can calculate the time between frames through subtraction.

    This short bit of time, just milliseconds, describes whether or not you see a tiny skip. You can process it into a more human-readable number by converting it to frames per second. Call it the instantaneous fps equivalent (equal to 1000 divided by the time difference between frames in milliseconds).

    After one second of FRAPS sample time, say it produces 35 frames' worth of data, so the average fps would clearly be 35. But what else would be useful to know?

    Here is a sequence of instantaneous frame rate samples taken from a real sequence, roughly describing the original; both have an average fps of 35: {20, 20, 30, 6, 30, 60, 80}. I've put a simulated microstutter in there.

    So when we look at these "frame rates", what useful information other than the average should be pulled from them? Is expressing the relationship between the ups and downs going to be useful?

    Maybe the reader would find it useful to see how much time is spent "down", or below ideal performance. If ideal = 30 fps, then 3 of the 7 samples are below ideal. Is "roughly 43% not ideal" informative to the reader? It will tend to capture "microstutters", because the whole reason we care about microstutters is that they bring performance below the normal ideal.

    What are some other ideas? Here is the data plugged into Wolfram Alpha: http://www.wolframalpha.com/input/?i...,30,6,30,60,80

    We can express that 6 fps equivalent microstutter by telling the reader about events below a lower confidence bound. So: there is 95% confidence that the "fps" is above 16. One sample falls below that, therefore there is one microstutter; or, 14% of the time sampled is stuttering.

    You can also give the reader a single frame rate value, something like an updated "average fps", and bias that value a certain way. Maybe people would like to know the first quartile, as a way to express most of the lower frame rates. Or maybe, similar to the confidence interval method, it would be useful to know the "normal low" fps, which I define as the mean minus the standard deviation.

    It is up to the readers and the reviewers to decide what advanced data presentation they use, and what improvements they make to their current systems. Whatever they do, most of them definitely need improvement.
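    For reference, here's a minimal Python sketch of the numbers walked through above, using the post's made-up fps samples; the 30 fps "ideal" threshold and the mean-minus-standard-deviation "normal low" come straight from the post, while the helper name inst_fps is just made up for the example:

```python
import statistics

def inst_fps(frame_time_ms):
    """Instantaneous fps equivalent: 1000 divided by the frame time in ms."""
    return 1000.0 / frame_time_ms

# The post's example samples, including one simulated microstutter at 6 fps
fps_samples = [20, 20, 30, 6, 30, 60, 80]

mean_fps = statistics.mean(fps_samples)          # plain average, ~35 fps
stdev_fps = statistics.pstdev(fps_samples)       # spread of the samples
normal_low = mean_fps - stdev_fps                # the post's "normal low" fps
q1 = statistics.quantiles(fps_samples, n=4)[0]   # first quartile

# Share of samples below the "ideal" threshold of 30 fps (3 out of 7 here)
ideal = 30
below_ideal = sum(1 for f in fps_samples if f < ideal) / len(fps_samples)

print(f"a 166 ms frame is about {inst_fps(166):.0f} fps")   # ~6 fps, the stutter
print(f"average:                {mean_fps:.1f} fps")
print(f"normal low (mean - sd): {normal_low:.1f} fps")
print(f"first quartile:         {q1:.1f} fps")
print(f"below {ideal} fps:           {below_ideal:.0%} of samples")
```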

  13. #13
    I am Xtreme
    Join Date
    Oct 2004
    Location
    U.S.A.
    Posts
    4,743
    Quote Originally Posted by sniper_sung View Post
    Maybe the CPU was a bottleneck there.
    Yeah, it can be tricky to tell at 1920x1200, which is why I've been complaining lately when reviews don't test 2560x1600.


    Asus Z9PE-D8 WS with 64GB of registered ECC ram.|Dell 30" LCD 3008wfp:7970 video card

    LSI series raid controller
    SSDs: Crucial C300 256GB
    Standard drives: Seagate ST32000641AS & WD 1TB black
    OSes: Linux and Windows x64

  14. #14
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Location
    Bloomfield Evergreen
    Posts
    607
    Quote Originally Posted by matose View Post
    The CPU was a bottleneck only in Resident Evil 5.
    Note that in Vantage and Unigine, using X-Mode (ATI+NVIDIA), I had huge stuttering problems and missing textures. AvP scaling was good and image quality was perfect, but still, only one game?
    The drivers are the main problem, because the technology itself is good and has potential!

    Thanks for the feedback guys!
    It's very obvious that the CPU was a bottleneck in 3DMark Vantage. The GPU score of 2 x 5870 with Catalyst 10.6+ should be at least 29k, and the total score should be no less than that, if it's an i7 with at least 4 cores and HT on.

  15. #15
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Wow, that must have taken a looong time... thx man.
    Too bad Hydra still doesn't support more than 4 cards/GPUs...
