
Thread: Show Off Your Server!

  1. #26
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by Buckeye View Post
    Looks at my rack here... man I need to fill this puppy up with stuff like this
    A cost effective way to fill it up
    http://www.servethehome.com/Server-d...ode-8-sockets/
    rgds,
    Andy

  2. #27
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Irvine, CA
    Posts
    516
    Quote Originally Posted by Buckeye View Post
    Very nice Cook !

    You're down in Irvine, I see. I have a client down there... TGS; they were running 40,000+ cores last time I was in there.

    Looks at my rack here... man I need to fill this puppy up with stuff like this
    Yup, I am indeed. We were planning to provision the gear in a datacenter in Irvine, but the cost of bandwidth there and the lack of 10G and transit options swayed us away. We currently have a 10G uplink per rack for our own use, and it is fun to play with. If only I could get my own DWDM gear set up from LA to Irvine =)
    Project Elegant Dreams
    CPU: Intel 980x (3005F803T) @ Stock , Mobo: Rampage III Extreme, Ram: 24GB Corsair Vengeance 1600 C9 1T 1.51v GPU: SLI EK'd Vanilla GTX580, 0.875v @ idle.
    Sound:Asus Xonar Xense w/Sennheiser PC350
    SSD: 2x OCZ Agility 3 120GB RAID-0 on ICH10 Storage: 2x1.5TB Hitachi 5K3000, 1.5TB WD Black, 1TB F3, various other HDD's Case: Obsidian 800D, PSU: Corsair AX1200,


    Water Cooling
    Main Loop
    EK Supreme HF
    EK GTX480 blocks on EVGA 580s
    MCP355 and MCP35X in serial
    Dual BlackIce GTX480 Rads

  3. #28
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Quote Originally Posted by Andreas View Post
    A cost effective way to fill it up
    http://www.servethehome.com/Server-d...ode-8-sockets/
    rgds,
    Andy
    Andy, you're killing me man, I would love a bunch of those LOL

    Dang those are very nice prices tho !

    Quote Originally Posted by Cookiesowns View Post
    Yup, I am indeed. We were planning to provision the gear in a datacenter in Irvine, but the cost of bandwidth there and the lack of 10G and transit options swayed us away. We currently have a 10G uplink per rack for our own use, and it is fun to play with. If only I could get my own DWDM gear set up from LA to Irvine =)
    Yeah, I am not sure how they do it connection-wise, but their server setup is very impressive. It's all theirs, no hosted space.
    Last edited by Buckeye; 02-21-2013 at 01:13 PM.

  4. #29
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Irvine, CA
    Posts
    516
    Quote Originally Posted by Buckeye View Post
    Andy, you're killing me man, I would love a bunch of those LOL

    Dang those are very nice prices tho !



    Yeah, I am not sure how they do it connection-wise, but their server setup is very impressive. It's all theirs, no hosted space.
    Do you have more information about "TGS"? Would love to look into them!
    Project Elegant Dreams
    CPU: Intel 980x (3005F803T) @ Stock , Mobo: Rampage III Extreme, Ram: 24GB Corsair Vengeance 1600 C9 1T 1.51v GPU: SLI EK'd Vanilla GTX580, 0.875v @ idle.
    Sound:Asus Xonar Xense w/Sennheiser PC350
    SSD: 2x OCZ Agility 3 120GB RAID-0 on ICH10 Storage: 2x1.5TB Hitachi 5K3000, 1.5TB WD Black, 1TB F3, various other HDD's Case: Obsidian 800D, PSU: Corsair AX1200,


    Water Cooling
    Main Loop
    EK Supreme HF
    EK GTX480 blocks on EVGA 580s
    MCP355 and MCP35X in serial
    Dual BlackIce GTX480 Rads

  5. #30
    Wuf
    Join Date
    Jul 2007
    Location
    Finland/Tampere
    Posts
    2,400
    Quote Originally Posted by Andreas View Post
    A cost effective way to fill it up
    http://www.servethehome.com/Server-d...ode-8-sockets/
    rgds,
    Andy
    Bought one with 2 nodes, each holding dual L5520s and 48 GB of RAM. New ESXi nodes.
    As soon as I get the server, I'll post pictures of my rack cabinet.
    It's going to hold:
    HP1800-24G
    Dell2848
    Dell C6100 XS23-TY3
    Custom ZFS server in Supermicro 823
    Dell PowerVault MD1000
    HP Storageworks MSA70
    5000VA APC Rack UPS
    You use IRC and Crunch in Xs WCG team? Join #xs.wcg @ Quakenet
    [22:53:09] [@Jaco-XS] i'm gonna overclock this damn box!
    Ze gear:
    Main rig: W3520 + 12GB ddr3 + Gigabyte X58A-UD3R rev2.0! + HD7970 + HD6350 DMS59 + HX520 + 2x X25-E 32gig R0 + Bunch of HDDs.
    ESXI: Dell C6100 XS23-TY3 Node - 1x L5630 + 24GB ECC REG + Brocade 1020 10GbE
    ZFS Server: Supermicro 826E1 + Supermicro X8DAH+-F + 1x L5630 + 24GB ECC REG + 10x 3TB HDDs + Brocade 1020 10GbE
    Lappy!: Lenovo Thinkpad W500: T9600 + 8GB + FireGL v5700 + 128GB Samsung 830 + 320GB 2.5" in ze dvd slot + 1920x1200 @ 15.4"


  6. #31
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    The last of the server trio is now done.
    After a high-capacity storage server and a fast data server, the only thing left was a compute server.

    Due to their excellent double-precision performance, I waited for the availability of the GK110 Kepler GPUs.
    The monitor is connected to a small VGA card to keep the 3 Titans crunching undisturbed.



    rgds,
    Andy

  7. #32
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    ^^ Can I borrow just one card? Just one measly one?
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  8. #33
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by Movieman View Post
    ^^ Can I borrow just one card? Just one measly one?
    :-)
    Sure, stop by tomorrow for a traditional Vienna coffee and Sachertorte ....

    It was quite "hard" to get 4 cards. Only 8 Asus cards have been shipped to Austria in total so far.
    4 cards, 4 dealers. But they are fast. and silent. and cool. love it.
    My son likes them as well. and borrowed one. only for an hour. yesterday :-)

    Andy

  9. #34
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    Quote Originally Posted by Andreas View Post
    :-)
    Sure, stop by tomorrow for a traditional Vienna coffee and Sachertorte ....

    It was quite "hard" to get 4 cards. Only 8 Asus cards have been shipped to Austria in total so far.
    4 cards, 4 dealers. But they are fast. and silent. and cool. love it.
    My son likes them as well. and borrowed one. only for an hour. yesterday :-)

    Andy
    I would love to drop by for a visit. Will 5 PM be too early? That is the fastest flight I can get.

    Very nice system..Congrats..
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  10. #35
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by Movieman View Post
    I would love to drop by for a visit..Will 5PM be too early as that is the fastest flight I can get.
    Very nice system..Congrats..
    5 pm. Perfect.
    Tomorrow is my birthday and the family dinner with friends starts at 6 pm. You are welcome to join.
    Too bad I can't send Wienerschnitzel and Apfelstrudel as email attachments

    Anyway, how is your Supermicro Hyperserver project doing?
    I do have a (serious) question: the E5 Xeons are locked down by Intel on BCLK, frequencies, and memory speed (max 1600 MHz). How is it possible that Supermicro can run them at 1866 MHz memory speed? Are these special chips shipped by Intel? Any secrets you can share? Just curious.

    with kind regards,
    Andy

  11. #36
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    Quote Originally Posted by Andreas View Post
    5 pm. Perfect.
    Tomorrow is my birthday and the family dinner with friends starts at 6 pm. You are welcome to join.
    Too bad I can't send Wienerschnitzel and Apfelstrudel as email attachments

    Anyway, how is your Supermicro Hyperserver project doing?
    I do have a (serious) question: the E5 Xeons are locked down by Intel on BCLK, frequencies, and memory speed (max 1600 MHz). How is it possible that Supermicro can run them at 1866 MHz memory speed? Are these special chips shipped by Intel? Any secrets you can share? Just curious.

    with kind regards,
    Andy
    Funny you should mention this, as the last parts of the build arrived today. No special chips; I will try for a stable 106 BCLK and memory at 2000 MHz. Wish me luck!
    Oh, real apple strudel....mouth waters..
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  12. #37
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Minnesota
    Posts
    587
    Hey MM, where is your monster rig at? Thought you would have it here by now. Pure e-peen in this thread. Makes my dually look tiny compared to what I see and drool at here.

  13. #38
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    Last parts just got here today: the case and these new monster heatsinks.
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  14. #39
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Minnesota
    Posts
    587
    Quote Originally Posted by Movieman View Post
    Last parts just got here today..Case and these new monster HS
    Better be posting some pics when you fire that beast up. Notify the nuke plant for extra power yet?
    Last edited by bearcatrp; 03-15-2013 at 07:34 PM.

  15. #40
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Just to complete the story: the full family is together now.

    10,752 CUDA cores
    24 GB GDDR5 RAM
    > 1 TB/sec aggregated memory bandwidth (across the cards)
    ca. 6 TFlop/s (double precision), ca. 18 TFlop/s (single precision)
    (This is roughly comparable to the #1 position of the Top500 list in 2000, the ASCI White machine, at approx. 110 million US$)

    When PCIe 3.0 support is turned on, each card can read/write at about 11 GB/sec on the PCIe bus.
    For full concurrent PCIe bandwidth across all 4 cards, a dual-socket SB machine is needed, with its 80 PCIe lanes and better main memory bandwidth
    (with 1600 MHz DDR3, my dual-socket SB delivers ca. 80 GB/sec in the stream benchmark).

    So, depending on the GPU workload, an LGA 2011 system might be OK (when compute or device-memory bound), while a dual-SB board is needed when I/O bound.
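
    For reference, that stream number is from the well-known STREAM (triad) benchmark. A minimal triad-style sketch along those lines (not the official benchmark; array size, threading and timing are simplified here) could look like this:

    Code:
    // Minimal triad-style memory bandwidth sketch (not the official STREAM benchmark).
    // Build: g++ -O3 -fopenmp triad.cpp -o triad   (array size is illustrative)
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t N = 1ull << 26;                  // 64M doubles per array, 512 MB each
        std::vector<double> a(N, 1.0), b(N, 2.0), c(N, 0.0);
        const double s = 3.0;

        auto t0 = std::chrono::steady_clock::now();
        #pragma omp parallel for
        for (long long i = 0; i < (long long)N; ++i)  // triad: c[i] = a[i] + s * b[i]
            c[i] = a[i] + s * b[i];
        auto t1 = std::chrono::steady_clock::now();

        double sec   = std::chrono::duration<double>(t1 - t0).count();
        double bytes = 3.0 * N * sizeof(double);      // two arrays read, one written
        std::printf("triad bandwidth: %.1f GB/s\n", bytes / sec / 1e9);
        return 0;
    }

    On a dual-socket board, thread placement and first-touch allocation matter a lot; getting near the 80 GB/sec figure requires the arrays to be spread across both sockets' memory.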



    cheers,
    Andy
    Last edited by Andreas; 03-16-2013 at 08:00 AM.

  16. #41
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Minnesota
    Posts
    587
    Quote Originally Posted by Andreas View Post
    When PCIe 3.0 support is turned on

    cheers,
    Andy
    HUH? Could you explain? You're saying I have to turn on PCIe 3.0 for my 3770K and my 6850? I thought it did this automatically, or is this specific to Nvidia cards?

  17. #42
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by bearcatrp View Post
    HUH? Could you explain? You're saying I have to turn on PCIe 3.0 for my 3770K and my 6850? I thought it did this automatically, or is this specific to Nvidia cards?
    There is a known issue with Sandy Bridge-E CPUs and Nvidia cards.

    When Intel released these CPUs, they were capable of PCIe 3.0 but not yet certified for the 8 GT/s speed. Nvidia claimed that there were a lot of timing variations across the various chipsets and forced the Kepler cards on those CPUs and motherboards down to PCIe 2.0 speed. Later they released a little utility with which users can "switch" their systems to run in the faster PCIe 3.0 mode.
    Here is the utility: http://nvidia.custhelp.com/app/answe...n-x79-platform
    Use GPU-Z to check which speed your system is currently running at, then apply the utility if needed.

    Generally speaking:
    The GTX Titan in its original mode (PCIe 2.0) had 3.8 GB/sec write speed and 5.2 GB/sec read speed (tested with the bandwidth utility from CUDA SDK version 5.0).
    After switching the system to 3.0, both read and write are now in the 11 GB/sec range.
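
    A stripped-down version of that host-to-device measurement (pinned memory timed with CUDA events, much like the SDK's bandwidth sample; buffer size and iteration count are just placeholders) could look like this:

    Code:
    // Rough host->device bandwidth check with pinned memory and CUDA events.
    // Build: nvcc -O2 h2d_bw.cu -o h2d_bw   (sketch only, no error handling)
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 256ull << 20;            // 256 MB per transfer
        void *h_buf = nullptr, *d_buf = nullptr;
        cudaMallocHost(&h_buf, bytes);                // pinned host memory, needed for peak PCIe rates
        cudaMalloc(&d_buf, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);   // warm-up copy

        const int iters = 20;
        cudaEventRecord(start, 0);
        for (int i = 0; i < iters; ++i)
            cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("Host->Device: %.2f GB/s\n", (double)bytes * iters / (ms / 1e3) / 1e9);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_buf);
        cudaFreeHost(h_buf);
        return 0;
    }

    On a link stuck at 2.0 this should land around the Gen2 figures above; after forcing 3.0 it should move toward the 11 GB/sec range.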

    People often complain about the sub-linear scaling of SLI and triple-SLI systems, sometimes with negative scaling when a fourth card is added.
    If the application uses a lot of PCIe bandwidth, the memory bus quickly gets overloaded by the combined demands of the graphics cards and the CPU.

    Some numbers:
    Max theoretical memory bandwidth (Max. theoretical = Guaranteed not to be exceeded)
    LGA-1155 socket with DDR3-1600 = 25.6 GB/sec (2 memory channels)
    LGA-2011 socket with DDR3-1600 = 51.2 GB/sec (4 memory channels)
    Dual LGA-2011 sockets with DDR3-1600 = 102.4 GB/sec (8 memory channels)

    Practical limits are strongly impacted by the memory access pattern and can range from 20% to 80% of the max speed.
    With the Stream benchmark, 80% seems to be the upper bound.

    PCIe speed:
    Modern CPUs feature PCIe 3.0, with 1 GB/sec read and (concurrently) 1 GB/sec write speed per PCI-Express lane. So an x16 PCIe 3.0 slot has a combined I/O speed of 32 GB/sec (16 read and 16 write), completely overwhelming the memory bandwidth of an LGA-1155 system. If maximum I/O speed is to be achieved, the memory bus bottleneck has to be removed. This can be done with the LGA-2011 socket, which provides up to 40 GB/sec of memory bandwidth (measured with stream). "Unfortunately" the LGA-2011 also has 40 PCIe lanes which, if used effectively, would saturate the 4 memory channels of this system as well. This is what happens when multiple cards capable of high I/O rates (i.e. graphics cards) are used. And even if the memory system could provide enough bandwidth to the PCIe subsystem, the CPU still has to compete for memory access as well.

    A further problem is the cache hierarchy. To maintain coherency between what the CPU thinks is stored in main memory and what the devices see, the caches need to be updated or flushed whenever an I/O card writes to main memory. As a consequence, the CPU's access times to those memory addresses can increase significantly (up to 10-fold).
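
    The peak figures above, both the DDR3 channel numbers and the x16 PCIe 3.0 number, follow from simple arithmetic. A tiny sanity check, assuming DDR3-1600 on 64-bit channels and PCIe 3.0's 8 GT/s per lane with 128b/130b encoding:

    Code:
    // Back-of-the-envelope peak bandwidth, matching the figures quoted above.
    #include <cstdio>

    int main() {
        // DDR3-1600: 1600 MT/s per channel, 8 bytes per transfer (64-bit channel) = 12.8 GB/s
        const double ddr3_1600_channel = 1600e6 * 8 / 1e9;
        std::printf("LGA-1155, 2 channels:    %.1f GB/s\n", 2 * ddr3_1600_channel);   // 25.6
        std::printf("LGA-2011, 4 channels:    %.1f GB/s\n", 4 * ddr3_1600_channel);   // 51.2
        std::printf("2x LGA-2011, 8 channels: %.1f GB/s\n", 8 * ddr3_1600_channel);   // 102.4

        // PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.98 GB/s per lane per direction
        const double pcie3_lane = 8e9 * (128.0 / 130.0) / 8 / 1e9;
        std::printf("x16 PCIe 3.0: %.1f GB/s each way, %.1f GB/s combined\n",
                    16 * pcie3_lane, 32 * pcie3_lane);                                // ~15.8 / ~31.5
        return 0;
    }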

    Some relief comes with dual-socket LGA 2011 systems: the combined memory bandwidth doubles. Even if all 4 GTX Titans were transmitting data at the same time, there would still be some memory bandwidth left for the 2 CPUs. To mitigate the cache problem mentioned above, Intel introduced a feature called Data Direct I/O in the dual-socket Xeon systems (Romley platform). As in the single-socket LGA-2011 system, data from I/O cards is written to main memory via the cache. To avoid the cache getting completely flushed (easy when the cache is 20 MB and the transfer is, say, 1 GB), the hardware reserves only about 20% of the cache capacity for the I/O operation, leaving enough valid cache content in place that the CPU can keep working effectively with the remaining capacity. The consequence: much better and more predictable CPU performance under high I/O load.

    One problem is currently not well addressed in these systems: NUMA and I/O affinity. It will take time until applications like games leverage the information they could obtain from the operating system about what the architecture of the system they run on really looks like.

    Some examples:
    1) If the program thread runs on core 0 (socket 0) and its main memory is allocated on the same socket, great. If the memory is allocated on the other socket, a performance hit sets in.
    2) With Sandy/Ivy Bridge the PCIe root complex is on-die, creating much better performance on dual-socket systems, but also dependencies. If your GPU sits in a physical PCIe slot wired to socket 0 and the program that needs the data has its memory on socket 0, things are great. If the target memory lives on socket 1, the data from the GPU (connected to socket 0) somehow has to get to socket 1. Enter QPI (QuickPath Interconnect). If QPI is set in your BIOS to energy-efficient sleep states, it always has to wake up before it can transfer data. Keep it awake for maximum performance. (A small pinning sketch follows below.)
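
    On Linux, the placement in examples 1) and 2) can be pinned explicitly. A rough sketch using libnuma together with the CUDA runtime; the node and device numbers are only placeholders and depend on which socket's PCIe root the card actually hangs off (lstopo from hwloc shows the topology):

    Code:
    // Sketch: keep the worker thread, its host buffer, and the GPU on the same socket.
    // Node and device numbers are hypothetical; no error handling.
    // Build: nvcc -O2 affinity.cu -o affinity -lnuma
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <numa.h>

    int main() {
        if (numa_available() < 0) { std::printf("no NUMA support\n"); return 1; }

        const int node = 0;     // socket whose PCIe root hosts our GPU (assumed)
        const int gpu  = 0;     // CUDA device attached to that socket (assumed)

        numa_run_on_node(node); // pin this thread to the cores of socket 0
        cudaSetDevice(gpu);     // talk to the GPU behind socket 0's PCIe root

        const size_t bytes = 64ull << 20;
        void *host = numa_alloc_onnode(bytes, node);              // buffer in socket-local memory
        cudaHostRegister(host, bytes, cudaHostRegisterDefault);   // pin it for fast DMA

        void *dev = nullptr;
        cudaMalloc(&dev, bytes);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);     // transfer stays off the QPI link

        cudaFree(dev);
        cudaHostUnregister(host);
        numa_free(host, bytes);
        return 0;
    }

    Without such pinning, the scheduler is free to put the thread or the buffer on the remote socket, and every transfer then crosses QPI.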

    It is simple:
    For compute-bound problems, look for the right CPU. For data- or I/O-bound problems, look at the memory and I/O architecture (and basically forget the CPU).

    cheers,
    Andy
    Last edited by Andreas; 03-16-2013 at 01:50 PM.

  18. #43
    Xtreme Addict Evantaur's Avatar
    Join Date
    Jul 2011
    Location
    Finland
    Posts
    1,043


    geek :banana::banana::banana::banana:!

    I like large posteriors and I cannot prevaricate

  19. #44
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    ^^ You got that right..I see those four cards and think that one system would have heated my whole house this winter!
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  20. #45
    Xtreme Member
    Join Date
    Oct 2012
    Posts
    448
    I was thinking,"Holy cannoli, this rig has more computing power with the GPU's than the early supercomputers".

    and, "Man, I paid less for my first car than this rig probably cost".


    and finally: "What the heck is he going to do with all of this computing power?"
    Desktop rigs:
    Oysterhead- Intel i5-2320 CPU@3.0Ghz, Zalman 9500AT2, 8Gb Patriot 1333Mhz DDR3 RAM, 120Gb Kingston V200+ SSD, 1Tb Seagate HD, Linux Mint 17 Cinnamon 64 bit, LG 330W PSU

    Flying Frog Brigade-Intel Xeon W3520@2.66Ghz, 6Gb Hynix 1066Mhz DDR3 RAM, 640Gb Hitachi HD, 512Mb GDDR5 AMD HD4870, Mac OSX 10.6.8/Linux Mint 14 Cinnamon dual boot

    Laptop:
    Colonel Claypool-Intel T6600 Core 2 Duo, 4Gb 1066Mhz DDR3 RAM, 1Gb GDDR3 Nvidia 230M,240Gb Edge SATA6 SSD, Windows 7 Home 64 bit




  21. #46
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Minnesota
    Posts
    587


    Quote Originally Posted by yojimbo197 View Post
    and finally: "What the heck is he going to do with all of this computing power?"
    3D :banana::banana::banana::banana:!
    I am curious how much juice this supercomputer eats at 100% load on the GPUs. Are all the PCI-E slots full x16 slots?
    Last edited by bearcatrp; 03-16-2013 at 10:21 PM.

  22. #47
    Xtreme Member
    Join Date
    Jun 2009
    Posts
    101
    Quote Originally Posted by Andreas View Post
    Just to complete the story: the full family is together now.

    10,752 CUDA cores
    24 GB GDDR5 RAM
    > 1 TB/sec aggregated memory bandwidth (across the cards)
    ca. 6 TFlop/s (double precision), ca. 18 TFlop/s (single precision)
    (This is roughly comparable to the #1 position of the Top500 list in 2000, the ASCI White machine, at approx. 110 million US$)

    When PCIe 3.0 support is turned on, each card can read/write at about 11 GB/sec on the PCIe bus.
    For full concurrent PCIe bandwidth across all 4 cards, a dual-socket SB machine is needed, with its 80 PCIe lanes and better main memory bandwidth
    (with 1600 MHz DDR3, my dual-socket SB delivers ca. 80 GB/sec in the stream benchmark).

    So, depending on the GPU workload, an LGA 2011 system might be OK (when compute or device-memory bound), while a dual-SB board is needed when I/O bound.

    cheers,
    Andy
    Thanks for the facts about your system. It's mind-blowing to think of the power you can fit into a system a decade later, and it can only make you wonder what computers will be capable of in another 10 years. If only you could go back in time with your machine and sell it to the highest bidder!!!!

  23. #48
    Crunching For The Points! NKrader's Avatar
    Join Date
    Dec 2005
    Location
    Renton WA, USA
    Posts
    2,891
    Quote Originally Posted by bearcatrp View Post
    3D :banana::banana::banana::banana:!
    On a side note,
    I love those: Blu-ray with 3D on a Samsung 55" LED...

    so much win you can't even explain

  24. #49
    V3 Xeons coming soon!
    Join Date
    Nov 2005
    Location
    New Hampshire
    Posts
    36,363
    Quote Originally Posted by NKrader View Post
    On a side note,
    I love those: Blu-ray with 3D on a Samsung 55" LED...

    so much win you can't even explain
    ^^..Future owner of the West Coast Viagra franchise..
    Crunch with us, the XS WCG team
    The XS WCG team needs your support.
    A good project with good goals.
    Come join us,get that warm fuzzy feeling that you've done something good for mankind.

    Quote Originally Posted by Frisch View Post
    If you have lost faith in humanity, then hold a newborn in your hands.

  25. #50
    Xtreme Cruncher
    Join Date
    Jun 2009
    Location
    Bombay , India
    Posts
    454
    Hey guys, just finished up the build for my storage server...
    It's nothing compared to the monsters that run loose here

    The config is pretty self-explanatory from the pictures

    Basically it's the following:

    Cosmos S2
    i3-2120
    Maximus V Formula
    LSI 9261-8i RAID card (set up in RAID 5)
    24 TB of WD Red HDDs
    Plextor 128GB SSD for boot
    AX850
    16GB RAM
    GTX 650
    H80

    The Pictures
    Cheers and Kind Regards Always !




    Intel Core i7 3960x | 3x Intel Core i7 980x | 3x Intel Core i7 920 | 2x Intel Core i7 2600k | 2x AMD opteron 6282 SE | 3x Asus Rampage II Extreme | 2x Asus Rampage III Extreme | Asus Rampage IV Extreme | 1x Gigabyte X58A-UD7 | 2x Asus Maximus IV Extreme-Z | Asus KGPE-D16 | 3x 6GB DDR3 Corsair Dominator 1600 Cl7 | 1x Patriot Viper II 6GB 2000 Cl8 | 3x Corsair Hx1000 | 4x Corsair Ax 1200 | 3x Antec 1200 | 4x Corsair Obsidian 800D | 2x Intel 80Gb G2 SSD | 4x Kingston HyperX 120GB | 2x Vertex 2 120GB | 2xWD 150GB Velociraptor + 1x WD 300GB Velociraptor +5TB | Msi Nvidia Gtx 295 | Msi Ati 4870x2 OC Edition | 2x Msi Ati 4890 OC Editions| 2x Sapphire Ati 5870's| Sapphire 5970 OC Edition | 2x Msi Gtx 460 | 3x Sapphire 6970 | 3x Asus Gtx 580 | 3x Asus 7970
