Thread: 100 Gbps 4x EDR IB


  1. #1
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349

    100 Gbps 4x EDR IB



    There wasn't really a section in the forum for network bandwidth benchmarking, so I had to just put it here instead.

    You can see the results from benchmarking my new 100 Gbps 4x EDR InfiniBand (IB) network.

    Please note that the results are in gigabits per second; 100 Gbps works out to a peak of around 12.5 gigaBYTES per second of NETWORK transfer.

    (For your reference, a single SATA 6 Gbps SSD is good for at most 750 MB/s of raw line rate, and more like 550-600 MB/s in practice. This is like striping well over a dozen SATA 6 Gbps SSDs together in RAID 0 just for peak bandwidth, and it's running on my NETWORK right now.)
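
    (If you want to double-check the unit math, here's a quick back-of-the-envelope calculation in plain Python. It has nothing to do with the benchmark tool itself; it's just the conversion written out.)

    Code:
    # Back-of-the-envelope line-rate arithmetic for the numbers quoted above.
    LINE_RATE_GBPS = 100        # 4x EDR InfiniBand
    SATA_SSD_MBPS = 750         # raw SATA 6 Gbps line rate per SSD (6000 / 8)

    line_rate_gb_per_s = LINE_RATE_GBPS / 8                  # gigabits -> gigabytes per second
    ssd_equivalent = line_rate_gb_per_s * 1000 / SATA_SSD_MBPS

    print(f"{LINE_RATE_GBPS} Gbps = {line_rate_gb_per_s:.1f} GB/s")      # 12.5 GB/s
    print(f"~{ssd_equivalent:.0f} SATA 6 Gbps SSDs in RAID 0 to match")  # ~17 drives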

    I actually had difficulty benchmarking this, because even a 100 GB RAM drive wasn't fast enough to test the peak transfer speed. (The RAM drive topped out at around ~9-ish Gbps.)

    Thanks.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  2. #2
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    What were you testing to/from for this, and for what purpose? Work, or something for home...?

    Either way, hot speed
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  3. #3
    Xtreme Owner Charles Wirth
    Join Date
    Jun 2002
    Location
    Las Vegas
    Posts
    11,653
    Insane speed, thanks for sharing
    Intel 9990XE @ 5.1 GHz
    ASUS Rampage VI Extreme Omega
    RTX 2080 Ti Galax Hall of Fame
    64GB Galax Hall of Fame
    Intel Optane
    Platimax 1245W

    Intel 3175X
    Asus Dominus Extreme
    GTX 1080 Ti Galax Hall of Fame
    96GB Patriot Steel
    Intel Optane 900P RAID

  4. #4
    Xtreme Addict
    Join Date
    Oct 2007
    Location
    Chicago, Illinois
    Posts
    1,182
    What was the CPU config?



  5. #5
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by Serra View Post
    What were you testing to/from for this, and for what purpose? Work, or something for home...?

    Either way, hot speed
    This is two nodes, directly connected via 4x Enhanced Data Rate (EDR) InfiniBand (IB) network interface cards (Mellanox ConnectX-4 dual-port cards).

    All of my compute nodes are configured as follows:
    Supermicro 6027TR-HTRF (4 half-width, dual socket nodes) consisting of:
    2x Intel Xeon E5-2690 (v1) processors (8-core, 2.9 GHz base speed, 3.3 GHz all core turbo, 3.6 GHz max turbo, Hyperthreading disabled, 16 cores total per node)
    8x 16 GB Samsung DDR3-1866 ECC Registered 2Rx4 RAM running at DDR3-1600 (this populated configuration isn't supported at the full DDR3-1866 speed; 128 GB total per node)
    1x Intel 545 Series 1 TB SSD SATA 6 Gbps
    1x HGST 3 TB HDD 7200 rpm, also on SATA 6 Gbps

    The system has a dedicated 10/100 RJ45 LAN port for IPMI, plus two RJ45 Gigabit Ethernet ports as well as VGA, USB, PS/2, etc.

    Two of the nodes run SuSE Linux Enterprise Server 12 SP1, and two of the nodes are now running Windows Server 2016 (because of the Mellanox driver requirement).

    (This works out to a total of 64 cores, and 512 GB of RAM for the entire system.)

    I perform computer-aided engineering (CAE) work at home (I have my own small corporation), where I do finite element analysis (FEA) and computational fluid dynamics (CFD) runs.

    It is for work that I do from home, but I've always wanted to get into this type of hardware, so through my corporation, I am able to do so.

    Quote Originally Posted by Charles Wirth View Post
    Insane speed, thanks for sharing
    You're welcome.



    This is what it looks like trying to copy a 40 GiB file using the interface.

    I didn't have NFSoRDMA or SMB Direct set up.

    This was a RAM drive to RAM drive transfer. I think the fact that the copy still goes through the Windows kernel (instead of using RDMA) is what's slowing it down. (At about 1.1-ish GB/s, that's roughly 9-10 Gbps, which is only about 1/10th of what the interface can do.)

    With file transfers, I haven't found a good way of testing it or maximizing the transfer speeds, again, because I don't have NFSoRDMA or SMB Direct configured.

    (I'm not entirely sure I will actually complete that setup/configuration, because even at 1/10th the speed the rest of the hardware can't keep up anyway, so it doesn't really matter that much.)
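
    (For anyone who wants to try the same kind of RAM-to-RAM measurement without a disk in the loop, here is a rough sketch of a plain TCP throughput test. To be clear, this is not the tool I used, and an ordinary socket test like this won't come close to saturating the IB link; it just shows the kernel socket path that a normal file copy goes through. The port number is arbitrary.)

    Code:
    # Rough memory-to-memory TCP throughput sketch (not RDMA; this goes through
    # the kernel socket path, just like a normal file copy does).
    import socket, sys, time

    PORT = 50123                    # arbitrary port
    CHUNK = 4 * 1024 * 1024         # 4 MiB buffer, kept in RAM on both ends
    TOTAL = 40 * 1024**3            # 40 GiB, same size as the file copy above

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            received, start = 0, time.perf_counter()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received / elapsed / 1e9:.2f} GB/s "
                  f"({received * 8 / elapsed / 1e9:.1f} Gbps)")

    def client(host):
        buf = b"\x00" * CHUNK
        sent = 0
        with socket.create_connection((host, PORT)) as conn:
            while sent < TOTAL:
                conn.sendall(buf)
                sent += CHUNK

    if __name__ == "__main__":
        # "python tcp_bw.py server" on one node, "python tcp_bw.py client <host>" on the other
        server() if sys.argv[1] == "server" else client(sys.argv[2])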

    Quote Originally Posted by Hell Hound View Post
    What was the CPU config?
    Dual Intel Xeon E5-2690 (v1).

    This is a node-to-node direct transfer.

    I'm running switchless right now.

    When my corporation earns a little bit more money, I'll start looking at getting a switch, but for now, this is good.

    Most "normal" people run Gigabit Ethernet at home. Most. Some might be venturing into 10GbE for home use.

    This is putting me at 100 Gbps (~12.5 GB/s), in my basement, at home.

    The rest of my backend storage infrastructure can't keep up with this.

    I also went with this because I had been using Gigabit Ethernet as my node-to-node interconnect for my work, and it was proving to be too slow (mostly because of latency rather than raw line bandwidth, with only a few exceptions).

    There was also a YouTube video from Linus Tech Tips where they basically did the same thing, except they didn't really seem to know what to do with it. (This kind of interconnect is used a lot with supercomputers and high performance computing (HPC) clusters, which is very much a niche market, so if you don't do that kind of work on a regular basis, you're unlikely to know about the hardware infrastructure that supports it.) Either way, they clued me in on the fact that you can now pick up the network interface cards on eBay for < $300 apiece (which, considering the retail price is around $700, is a bargain!), so that's what I did.

    Linking my two pairs of nodes together (in a 2+2 configuration) with cables cost me about $1100, which again isn't so bad. The next big expense will be the 36-port switch, which is about $5000.

    But that will let me link all of the nodes together (rather than strictly in a 2+2 configuration) so that all four nodes can "see" each other, and if I need to throw everything I've got at a much bigger problem, I'll be able to do so.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  6. #6
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Just as a quick update:

    I have now also acquired a Mellanox MSB7890 36-port 4x EDR IB switch, with a total switching capacity of 7.2 Tbps (36 ports * 100 Gbps per port * 2 for full duplex), or 7,200 Gbps. Just think about that: some home installations might go up to a 48-port gigabit switch, while this single 36-port switch has the raw port bandwidth of about 3,600 Gigabit Ethernet ports. Cable management would be a colossal PITA, but yeah.
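
    (It's easy to slip a zero in there, so here's the capacity arithmetic written out in plain Python, just to keep the numbers straight.)

    Code:
    # Aggregate switching capacity of the 36-port EDR switch, counted full duplex.
    ports = 36
    port_rate_gbps = 100          # 4x EDR per port
    duplex_factor = 2             # switch vendors count both directions

    capacity_gbps = ports * port_rate_gbps * duplex_factor
    print(f"{capacity_gbps} Gbps = {capacity_gbps / 1000} Tbps")   # 7200 Gbps = 7.2 Tbps

    gbe_equivalent = ports * port_rate_gbps                        # each GbE port is 1 Gbps
    print(f"~{gbe_equivalent} Gigabit Ethernet ports' worth of raw port bandwidth")  # 3600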

    So now, all four nodes are able to talk to each other.


    Here is what the switch looks like (front view, albeit stock image):


    Here is what the rear looks like (upside-down):


    And here is what the insides look like:


    (Sidebar: the managed version of this switch comes with an Intel Celeron 1047UE processor (2-core, 1.4 GHz) and 4 GB of DDR3-1600 ECC Unregistered RAM in a single SODIMM.)

    So now, all four of my compute nodes can talk to each other and they're no longer bound/restricted to a 2+2 configuration, so any one node can talk to any other node directly, which is super exciting for me.

    And perhaps interestingly enough, the most expensive element of the upgrade was actually the switch. The cards and the cables themselves weren't all that terribly expensive, considering that the cheaper alternatives are also SIGNIFICANTLY slower.

    What's also interesting is that the rest of my home network is only wired Gigabit Ethernet (32 Gbps combined across all ports), which means I could fit all of that traffic into a single 100 Gbps port, or onto a breakout cable that splits one port into two 50 Gbps connections.

    In other words, I now have a massive excess of network switching capacity - LITERALLY more than I know what to do with. (I thought about making the new IB switch the backbone switch for my entire house, but the problem is that for runs longer than 5 metres I would have to start using active optical cables, and those are EXTREMELY, PROHIBITIVELY expensive.)

    And each port is capable of delivering 96-97% of the theoretical/designed capacity (roughly 96-97 Gbps), which is still RIDICULOUSLY fast.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}
