
Thread: 100 Gbps 4x EDR IB

  1. #1
    Xtreme Member
    Join Date: Apr 2006
    Location: Ontario
    Posts: 350

    100 Gbps 4x EDR IB

    There wasn't really a section in the forum for network bandwidth benchmarking, so I had to just put it here instead.

    You can see the results from benchmarking my new 100 Gbps 4x EDR IB network.

    Please note that the results are in gigabits per second, which corresponds to a peak of around 12.5 gigaBYTES per second of NETWORK transfer.

    (For reference, a single SATA 6 Gbps SSD tops out at 750 MB/s of raw line rate, and closer to 550-600 MB/s of actual transfer. Matching this link would be like having 16 or more SATA 6 Gbps SSDs striped together in RAID 0 for peak bandwidth, and this is running on my NETWORK right now.)
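
    If you want to sanity-check the conversion yourself, here is a quick back-of-the-envelope sketch (plain Python; the ~550 MB/s "realistic SATA SSD" figure is my own assumption, not something I measured):

    Code:
    # Back-of-the-envelope conversion for the numbers above.
    # 4x EDR InfiniBand signals at 100 Gbps; the usable data rate is slightly
    # lower (64b/66b encoding), so treat these as peak figures.

    link_gbps = 100.0                 # 4x EDR line rate
    link_gb_per_s = link_gbps / 8.0   # ~12.5 GB/s peak

    sata_raw_mb_per_s = 750.0         # SATA 6 Gbps raw line rate (6 Gbps / 8)
    sata_real_mb_per_s = 550.0        # realistic SATA SSD transfer (assumption)

    print(f"Peak link throughput: {link_gb_per_s:.1f} GB/s")
    print(f"SATA SSDs needed at raw line rate: {link_gb_per_s * 1000 / sata_raw_mb_per_s:.0f}")
    print(f"SATA SSDs needed at ~550 MB/s:     {link_gb_per_s * 1000 / sata_real_mb_per_s:.0f}")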

    I actually had difficulty benchmarking this, because even a 100 GB RAM drive couldn't exercise the peak transfer speed; the RAM drive itself wasn't fast enough. (It topped out at around 9 Gbps or so.)
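
    (If you want to take disks and RAM drives out of the picture entirely, the usual tools are ib_send_bw / ib_write_bw from the perftest package that comes with the Mellanox/OFED stack, since those do RDMA memory-to-memory transfers. As a rough, software-only alternative, below is a minimal memory-to-memory TCP throughput sketch in Python. It's just a sketch: it runs over the kernel socket path (e.g. IPoIB), so it will understate what RDMA can actually do, and the default peer address is a placeholder.)

    Code:
    # Minimal memory-to-memory throughput test over TCP (e.g. over IPoIB).
    # Start with --server on one node, then point the client at it.
    # Goes through the kernel socket path, so it will NOT show full RDMA speed.
    import argparse, socket, time

    CHUNK = 4 * 1024 * 1024   # 4 MiB per send
    TOTAL = 8 * 1024 ** 3     # move 8 GiB in total

    def server(port):
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                received = 0
                while received < TOTAL:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)

    def client(host, port):
        buf = b"\x00" * CHUNK
        with socket.create_connection((host, port)) as s:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            start = time.perf_counter()
            sent = 0
            while sent < TOTAL:
                s.sendall(buf)
                sent += CHUNK
            elapsed = time.perf_counter() - start
        print(f"{sent / elapsed / 1e9:.2f} GB/s ({sent * 8 / elapsed / 1e9:.1f} Gbps)")

    if __name__ == "__main__":
        p = argparse.ArgumentParser()
        p.add_argument("--server", action="store_true")
        p.add_argument("--host", default="192.168.1.2")  # placeholder peer address
        p.add_argument("--port", type=int, default=5201)
        args = p.parse_args()
        server(args.port) if args.server else client(args.host, args.port)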

    Thanks.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  2. #2
    Xtreme CCIE
    Join Date: Dec 2004
    Location: Atlanta, GA
    Posts: 3,982
    What were you testing to/from for this, and for what purpose? Work, or something for home...?

    Either way, hot speed
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  3. #3
    Xtreme Owner Charles Wirth's Avatar
    Join Date: Jun 2002
    Location: Las Vegas
    Posts: 12,292
    Insane speed, thanks for sharing
    Intel 7960X ES @ 4.4 GHz
    ASUS Rampage VI Extreme
    GTX 1080ti Galax Hall of Fame
    32GB Galax Hall of Fame
    Intel Optane
    Platimax 1350W
    Enermax 240 Liqtech AIO

    Intel 8700K @ 5.2 GHz
    Asus Maximus X Apex
    GTX 1080ti Galax Hall of Fame
    16GB Galax Hall of Fame
    Intel Optane
    Platimax 1350W
    Enermax 240 Liqtech AIO

  4. #4
    Xtreme Addict
    Join Date: Oct 2007
    Location: Chicago, Illinois
    Posts: 1,167
    What was the CPU config?



  5. #5
    Xtreme Member
    Join Date: Apr 2006
    Location: Ontario
    Posts: 350
    Quote Originally Posted by Serra View Post
    What were you testing to/from for this, and for what purpose? Work, or something for home...?

    Either way, hot speed
    This is two nodes, directly connected via 4x Enhanced Data Rate (EDR) InfiniBand (IB) network interface cards (Mellanox ConnectX-4 dual-port cards).
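
    (On the Linux nodes, a quick way to confirm what the ports actually negotiated is to read the InfiniBand sysfs entries. Below is a little sketch I'd use for that; device names like mlx5_0 are just whatever your system enumerates.)

    Code:
    # Print the negotiated rate and state for each InfiniBand port (Linux only).
    # On a healthy link the rate file reads something like "100 Gb/sec (4X EDR)".
    from pathlib import Path

    for port_dir in sorted(Path("/sys/class/infiniband").glob("*/ports/*")):
        device = port_dir.parent.parent.name   # e.g. mlx5_0
        port = port_dir.name                   # e.g. 1
        rate = (port_dir / "rate").read_text().strip()
        state = (port_dir / "state").read_text().strip()
        print(f"{device} port {port}: {state}, {rate}")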

    All of my compute nodes are configured as follows:
    Supermicro 6027TR-HTRF (4 half-width, dual-socket nodes), each node consisting of:
    2x Intel Xeon E5-2690 (v1) processors (8-core, 2.9 GHz base, 3.3 GHz all-core turbo, 3.6 GHz max turbo, Hyper-Threading disabled; 16 cores total per node)
    8x 16 GB Samsung DDR3-1866 ECC Registered 2Rx4 RAM, running at DDR3-1600 because DDR3-1866 isn't supported in this configuration (128 GB total per node)
    1x Intel 545 Series 1 TB SSD SATA 6 Gbps
    1x HGST 3 TB HDD 7200 rpm, also on SATA 6 Gbps

    Each node also has a dedicated 10/100 RJ45 LAN port for IPMI, plus two RJ45 Gigabit Ethernet ports as well as VGA, USB, PS/2, etc.

    Two of the nodes run SUSE Linux Enterprise Server 12 SP1, and the other two are now running Windows Server 2016 (because of a Mellanox driver requirement).

    (This works out to a total of 64 cores, and 512 GB of RAM for the entire system.)

    I do computer-aided engineering (CAE) work at home (I have my own small corporation), running finite element analysis (FEA) and computational fluid dynamics (CFD) jobs.

    It is for work that I do from home, but I've always wanted to get into this type of hardware, so through my corporation, I am able to do so.

    Quote Originally Posted by Charles Wirth View Post
    Insane speed, thanks for sharing
    You're welcome.



    This is what it looks like trying to copy a 40 GiB file using the interface.

    I didn't have NFSoRDMA or SMB Direct set up.

    This was a RAM drive to RAM drive transfer. I think the fact that it still goes through the Windows kernel (rather than using RDMA) is slowing it down. (At about 1.1 GB/s, that's roughly 9-10 Gbps, which is only about 1/10th of what the interface can do.)

    With file transfers, I haven't found a good way of testing or maximizing the transfer speeds, again because I don't have NFSoRDMA or SMB Direct configured.

    (I'm not entirely sure that I will actually complete that setup/configuration, because even at 1/10th of the speed, the rest of the hardware can't keep up anyway, so it doesn't really matter that much.)
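
    If you'd rather get a number out of a plain file copy than eyeball the copy dialog, a simple timing wrapper like the sketch below does the job. The source/destination paths are placeholders for the local RAM drive and the other node's share:

    Code:
    # Time a large file copy and report the effective throughput.
    # Paths are placeholders: a file on the local RAM drive, copied to the
    # peer node's RAM drive exposed as an SMB share.
    import shutil, time
    from pathlib import Path

    src = Path(r"R:\testfile_40GiB.bin")                # placeholder: local RAM drive
    dst = Path(r"\\node2\ramdrive\testfile_40GiB.bin")  # placeholder: peer's share

    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start

    size = src.stat().st_size
    print(f"{size / 2**30:.1f} GiB in {elapsed:.1f} s -> "
          f"{size / elapsed / 1e9:.2f} GB/s ({size * 8 / elapsed / 1e9:.1f} Gbps)")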

    Quote Originally Posted by Hell Hound View Post
    What was the cpu config ?
    Dual Intel Xeon E5-2690 (v1).

    This is a node-to-node direct transfer.

    I'm running switchless right now.

    When my corporation earns a little more money, I'll start looking at getting a switch, but for now this is good.

    Most "normal" people run Gigabit ethernet at home. Most. Some might be venturing into 10GbE for home use.

    This is putting me at 100 Gbps (~12.5 GB/s), in my basement, at home.

    The rest of my backend storage infrastructure can't keep up with this.

    I also went with this because I was using my Gigabit Ethernet as the node-to-node interconnect for my work, and it was proving to be too slow (mostly because of latency rather than actual line bandwidth, with only a few exceptions). There was also a YouTube video from Linus Tech Tips where they basically did the same thing, except that they didn't really seem to know what to do with it or how. (This kind of interconnect is used a lot with supercomputers and high performance computing (HPC) clusters, which is very much a niche market, so if you're not doing that kind of work on a regular basis, you're unlikely to know about the hardware infrastructure that supports it.) But that video clued me in to the fact that you can now pick up the network interface cards on eBay for under $300 apiece (which, considering the retail price is around $700, is a bargain!), so that's what I did.

    Linking my two pairs of nodes together (in a 2+2 configuration), cables included, cost me about $1100, which again isn't so bad. The next big expense will be a 36-port switch, which is about $5000.

    But that will let me link all of the nodes together (rather than strictly in a 2+2 configuration) so that all four nodes can "see" each other, and if I need to throw everything I've got at a much bigger problem, I'll be able to do so.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}
