
Thread: Why the Asus X79 Sabertooth?

  1. #1
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349

    Why the Asus X79 Sabertooth?

    So you know how I have been soliciting advice for build/hardware information for a LGA2011 3930K build?

    Some of you (a fair number, actually) have been suggesting that I pick up the Asus X79 Sabertooth, and I was just curious: why that board over all the others?

    Feedback is greatly appreciated.

    Thanks.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  2. #2
    Xtreme Enthusiast
    Join Date
    Mar 2006
    Posts
    703
    Use the X79 RIVE. You won't regret it.
    Asus RIVE Bios 2003
    3930k 4.5 ghz @1.29v
    G-SKILLS 32gig ddr-1600 ripjaws Z
    Enermax Evo Galaxy 1250W
    2x EVGA GTX 480 Superclocked SLI @ 900/1800/2000
    X-Fi Fatal1ty Titanium PCI-E
    4 x crucial Realssd C300 256 Raid 0
    Areca 1880i
    Seagate 1TB
    CM HAF 932
    On water:
    HK 3.0
    2x MCP655
    FESER X360
    Blackice GTX 480
    DD-GTX 480 VGA blocks
    DD Reservoir
    Windows 7 64bit

    Dell 3008WFP 30"

    Help Save Lives Join World Community Grid!


  3. #3
    Xtreme Member
    Join Date
    Jan 2005
    Location
    Home
    Posts
    215
    Depending on your build size and wishes:
    RIVE ATX+

    RIV Gene uATX, Sabertooth, RIV-F, and the blue-camp Asus boards are all on par for clocking and differ mainly in features.
    Sabertooth and Formula are the same to me; the Sab is more stable 24/7, the Formula clocks a little better.
    REX4F - X79A-Sabertooth
    Noctua NH-D14 Pull-Pull NoiseBlocker PK3
    X3930 @4.5 - 1,325V
    X3860 @4.5 - 1,28V
    Geil Ultra+ 6x4 7-8-7-24@1814 1,65V Elpida???
    Gskill RJ-X 4x8 9-11-11 @2200 1,65V Sammy
    580 SLI




  4. #4
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by XII View Post
    Depending on your build size and wishes:
    RIVE ATX+

    RIV Gene uATX, Sabertooth, RIV-F, and the blue-camp Asus boards are all on par for clocking and differ mainly in features.
    Sabertooth and Formula are the same to me; the Sab is more stable 24/7, the Formula clocks a little better.
    Well, I'm hoping to have it OC'd to somewhere between 4.5 and 5 GHz using a Corsair H80 and have it stay stable while I'm beating the crap out of it doing engineering work. Some of my simulations run for weeks on end non-stop, and I've tested systems before where the usual stress-test programs indicated everything was safe/stable, but the moment I started running the full vehicle crash simulations, the system buckled under the workload.

    Features don't matter nearly as much, since the machine will spend most of its time either doing geometry preparation, meshing, or running the actual simulations once all of the geometry is prepped and meshed and the analysis case is set up.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  5. #5
    Xtreme Enthusiast
    Join Date
    May 2004
    Location
    AB. Canada
    Posts
    827
    You may want to consider a board that you can watercool, since running at full load will generate considerable heat. Unlike most other uses, where the system runs hard for a few hours and then gets a chance to cool down, yours never will.

    my 2¢


    "Great spirits have always encountered violent opposition from mediocre minds" - (Einstein)

  6. #6
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by MadHacker View Post
    You may want to consider a board that you can watercool, since running at full load will generate considerable heat. Unlike most other uses, where the system runs hard for a few hours and then gets a chance to cool down, yours never will.

    my 2¢
    Well, in two of my other threads on here asking about OCing a 3930K to 4.5 GHz, the Sabertooth was one of the recommendations that kept popping up. I figured that was because people thought it was the easiest or most stable board for OCing, but I don't know which boards can be watercooled.

    I'm also shifting my build plan: I was going to save up for my 4-node mini-cluster, but I can't wait that long. So now the system is tentatively scheduled to be dropped into a 3U rackmount enclosure with a 650 W PSU.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  7. #7
    Xtreme Enthusiast
    Join Date
    May 2004
    Location
    AB. Canada
    Posts
    827
    One of the boards I'm leaning towards is the MSI Big Bang-XPower II.
    It nicely meets three of my requirements:
    dual gigabit LAN, 8 memory slots, and it now has a full-cover waterblock.
    I don't like the stock heatsink anyway.


    "Great spirits have always encountered violent opposition from mediocre minds" - (Einstein)

  8. #8
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by alpha754293 View Post
    Well, I'm hoping to have it OC'd to somewhere between 4.5 and 5 GHz using a Corsair H80 and have it stay stable while I'm beating the crap out of it doing engineering work. Some of my simulations run for weeks on end non-stop, and I've tested systems before where the usual stress-test programs indicated everything was safe/stable, but the moment I started running the full vehicle crash simulations, the system buckled under the workload.

    Features don't matter nearly as much, since the machine will spend most of its time either doing geometry preparation, meshing, or running the actual simulations once all of the geometry is prepped and meshed and the analysis case is set up.
    If you are running simulations for weeks, you should really look at something that is not consumer-grade; not having ECC could really hurt. Is there a reason why you are not running something with GPGPU support, and have you looked at Socket G34?
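    To give a feel for why ECC matters on weeks-long runs, here's a back-of-envelope sketch. The soft-error rate used below is purely an assumption for illustration; published field studies disagree by orders of magnitude, so treat the number as a placeholder, not a measurement.

```python
# Rough illustration only: expected memory soft errors over a long run.
# The error rate is an ASSUMED placeholder, not a measured figure.
errors_per_gb_per_month = 0.1   # assumed soft-error rate (illustrative)
ram_gb = 64                     # the build being discussed in this thread
run_weeks = 3                   # a multi-week crash-sim run

weeks_per_month = 4.345
expected_errors = errors_per_gb_per_month * ram_gb * (run_weeks / weeks_per_month)
print(f"expected bit errors over the run: ~{expected_errors:.1f}")
# Without ECC, any one of these can silently corrupt a result or kill the run.
```

    Even with a modest assumed rate, the expected error count over a multi-week run on 64 GB comes out well above one, which is the whole argument for ECC here.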
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  9. #9
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by zanzabar View Post
    If you are running simulations for weeks, you should really look at something that is not consumer-grade; not having ECC could really hurt. Is there a reason why you are not running something with GPGPU support, and have you looked at Socket G34?
    I'll address the questions in a numbered list:

    1) The non-consumer option that I've spec'd out starts at $6500 and tops out at $22000. While I can slowly save for that, it doesn't address my immediate computing needs. My old compute node tops out at 16 GB of RAM and takes about a week per engine CFD run, if the run even completes. (So if I'm trying different meshing strategies to find the best one, it takes a really long time.) Of my two older workstations, the next fastest has only 8 GB of RAM, and the one after that has 16 GB but is only a dual single-core Opteron system. Can I upgrade the memory in some of those systems? Yes. But is it worth the $400-500 it would cost? Mehhh... (One is still DDR(1)-400 and the other is DDR2-667; both have ECC.)

    2) Not all of my programs run on GPGPU. Actually, come to think of it, I don't think ANY of them do. Most of my problems would exceed the available VRAM, and when that happens the solver falls back onto the CPU anyway, so nothing gets accelerated; from what I've been told, these codes DON'T swap out to system RAM (pity). I've also been told that if you can't fit the matrix in VRAM for, say, the matrix inversion step, you'll throw all kinds of errors when it tries. I'm not a programmer, but I reckon that doing a direct matrix inversion OUT of core would be a huge PITA to code up, because you'd have to interleave all of the matrix operations with memory and disk I/O.
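    Just to illustrate the kind of bookkeeping that out-of-core linear algebra involves, here's a rough NumPy sketch of a blocked matrix multiply over matrices stored on disk. The file paths, sizes, and tile size are all made up for the example; real solvers do this far more carefully, but the interleaving of compute and I/O is the point.

```python
import numpy as np

def blocked_matmul(a_path, b_path, out_path, n, block=1024):
    """Multiply two n x n float64 matrices stored on disk, tile by tile,
    so the in-memory working set stays at roughly three tiles instead of
    three full matrices. Illustrative sketch, not a production solver."""
    a = np.memmap(a_path, dtype=np.float64, mode="r", shape=(n, n))
    b = np.memmap(b_path, dtype=np.float64, mode="r", shape=(n, n))
    c = np.memmap(out_path, dtype=np.float64, mode="w+", shape=(n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            # accumulate one output tile in RAM
            acc = np.zeros((min(block, n - i), min(block, n - j)))
            for k in range(0, n, block):
                # each partial product only touches small tiles of A and B
                acc += a[i:i+block, k:k+block] @ b[k:k+block, j:j+block]
            c[i:i+block, j:j+block] = acc
    c.flush()
```

    Every output tile costs a sweep over a row of A-tiles and a column of B-tiles, which is exactly the constant shuffling between compute and I/O that makes out-of-core direct methods painful.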

    And a full vehicle crash model (where I expect the raw input data to end up between 2 and 3 GB) already eats upwards of 50% of the VRAM capacity of the most powerful GPGPU card, and the run will likely take upwards of ten TIMES that amount of memory. And because this is commercial code, as far as I know you can't parallelize across GPGPU cards. At least not yet, or not that I know of. (I think they are trying that stuff out, but it's not as easy or as simple as the GPGPU makers would like you to believe.)
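    The arithmetic above is easy to check. The 2.5 GB input and 10x multiplier come from the estimate in this post; the 6 GB VRAM figure is an assumed value for a top-end compute card of the era, just for illustration.

```python
# Back-of-envelope: does a crash-sim working set fit in VRAM?
input_gb = 2.5                    # raw model data going into the run
working_set_gb = 10 * input_gb    # ~10x the input, per the estimate above
vram_gb = 6.0                     # ASSUMED high-end compute-card VRAM

print(f"working set ~{working_set_gb:.0f} GB vs {vram_gb:.0f} GB VRAM")
print("fits in VRAM" if working_set_gb <= vram_gb else
      "needs out-of-core handling or CPU fallback")
```

    A ~25 GB working set against single-digit gigabytes of VRAM is why the CPU-with-lots-of-RAM route wins here.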

    For academia and research labs, you can kinda do whatever you want; for commercial code, it's harder. And even if you could, most companies' computing infrastructure is quite heterogeneous, so you can never predict what will ultimately be running your software. I would LOVE it if the crash sims were GPGPU accelerated, but I think there are practical limits to how far it can go. And in automotive simulation, we quite literally just crap data. It's data diarrhea. A simple structural analysis on a simple truck frame generates over 10 GB of data during the course of a run. I think one of the covers I was working on for alternative fuels was also generating several GB of data.

    And on my 8 GB daily driver, trying to bring just TWO of the vehicle systems into the CAD session crashes it (out of RAM).

    3) So, to bridge the gap until then while still serving the immediate computing needs, I'm starting to look at a 3930K with 64 GB of RAM, OC'd to 4.5-5 GHz on the H80 (if I can get away with going that fast and keep it stable running 24/7).

    I think the last time I tried that, I got it up to 4.5 GHz, ran IBT and a few of the "regular" stability-check programs, and thought it was good, so I started submitting the crash sim runs, and the system crashed. I ended up bumping the voltages three more times just to get it stable enough to survive the crash sim run. And it wasn't anything fancy, either; it was just a benchmark crash sim run.
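    The lesson there is that synthetic stress tests don't exercise the same big, irregular memory working sets a crash sim does. A crude way to approximate that is to hammer a large buffer and check that identical passes produce identical results; the function and sizes below are made up for illustration, not any standard tool.

```python
import numpy as np

def memory_stress_pass(size_mb=512, seed=0):
    """One pass of a crude memory-pressure check: fill a large buffer
    with pseudo-random data and reduce it to a checksum. Two passes
    with the same seed must match on stable hardware; a mismatch (or a
    crash) suggests an unstable overclock or, without ECC, a flipped bit."""
    rng = np.random.default_rng(seed)
    # 131072 uint64 values = 1 MB, so size_mb scales the buffer directly
    buf = rng.integers(0, 2**32, size=size_mb * 131072, dtype=np.uint64)
    return int(buf.sum())  # checksum over the whole buffer

# run the same pass twice; the checksums must agree
assert memory_stress_pass(64) == memory_stress_pass(64)
```

    It's no substitute for actually running the real workload, but it gets closer to the failure mode than an ALU-bound burn-in like IBT does.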

    http://www.youtube.com/watch?v=3ppuDedRZDM
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}
