This is the thread where you post pics/stats/use of your server!
Not an enthusiast one, but here is our little but cute server. It runs Windows Server 2012.
http://dl.dropbox.com/u/60873/server.jpg
This is a little beast I built a while ago for an HFT client.
The hardware was 3 servers sporting 3960Xs overclocked to 4.8 GHz, water cooled.
I went and installed this beast at Equinix NY4. Last time I talked with the client he said "I keep loading it but it doesn't slow down" LOL
The server rack cabinet is mine and stayed here, I just need to fill it back up again.
http://imageshack.us/a/img338/7420/20120601150538.jpg
http://imageshack.us/a/img534/9373/h...llfinished.jpg
Yes it is big, but it fits into my work area very well.
I should load it up with a huge water cooling system and put my main rig inside LOL
A single P4 Prescott rig could be a server; it's all about what you can do. No judging, just appreciating other people's stuff :-)
Upgrading mine soon, will do some pics for this thread :)
Found one of my OLD builds, not the current one
https://fbcdn-sphotos-g-a.akamaihd.n...12778475_n.jpg
Specs on that one:
E350 by MSI
4GB Corsair Vengeance
Dell PERC 6/i modded with AC GPU cooler, 92mm fan on huge alu block...
picopsu 125W VI
3x F4EG 2TB
Corsair Nova 128GB
Current:
i3-3225 @ 3.5ghz low vcore
Antec H620
Lian Li chassis
GA-Z77MX-D3H
8GB GeIL Enhance
Dell PERC 6/i
Chieftec 350W 80+ Bronze
4x F4EG 2TB
Corsair Nova 128GB
Next upgrade (waiting for one more part, this part being a cable...)
i3-3225 @ 3.5ghz low vcore
Antec H620
Lian Li chassis
GA-Z77MX-D3H
16GB Kingston confused ram..... Blu RED edition
Dell PERC H700 modded with a Thermalright HR-05 SLI chipset cooler (and 2x 80mm AC fans at ~5 V)
Antec EarthWatts 80+ Platinum
4x F4EG 2TB
Corsair Nova 128GB
Next step for the server: 6x 4TB RAID5 + 2x 500GB 2.5" RAID1
Power consumption is ~82 W at the moment, expected to fall once I go H700 + 80+ Platinum; aiming for 70 W
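(For reference, the usual RAID arithmetic: 6x 4TB in RAID5 leaves (6-1) x 4 = 20 TB usable, and the 2x 500GB RAID1 mirror gives 500GB.)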
I think this makes me the central distribution hub for World Community Grid in South Africa :D
http://i140.photobucket.com/albums/r...201-WA0017.jpg
2x Xeon E5645 at 3.24 GHz (180 MHz x 18, 185 MHz and above gives a weird slow-down issue)
2x Thermaltake Frio CPU Coolers
1x EVGA Classified SR-2
12x 4 GB (48 GB total) Patriot G2 Series DDR3-1600 (picture shows 16 GB, was taken before my previous payday :p:)
1x Intel 320 Series 120 GB SSD
2x Seagate 500 GB Hard Drive (ST500DM002)
1x Samsung 1.5 TB Hard Drive (HD154UI)
1x Seagate 2 TB Hard Drive (ST2000DL003)
1x Seagate 3 TB Hard Drive (ST3000DM001)
2x Gigabyte GeForce GTX 580 SuperOverclock (picture shows 1x Gigabyte, 1x Asus)
1x Corsair HX1050 Gold PSU
1x Corsair Obsidian 800D Case
0x Optical Drive (digital downloads FTW)
Using this right now, but next Thursday some new goodies are coming... :wasntme:
http://www.xtremesystems.org/forums/...ket-2011-Beast!
The above system has been a gem, solid and not one complaint.
100% load since April, and I even walked in one day when the house was 93F and the CPUs at 81C, and not a hiccup.. :up:
Last autumn, I built 3 servers for a study project. People might be interested in the hw configs.
Server number 1 (single socket):
Before building the faster server, this one served as a kind of testbed. The goal was to configure it for good I/O performance.
i7-3930K, 64GB, 32 x SSDs via 4 LSI controllers, Mellanox 40 Gbit/sec network adapter
Fast I/O needs parallelism, just like CPU-bound programs need multicore CPUs (or GPUs, if the application fits that hardware approach)
http://www.pbase.com/andrease/image/...5/original.jpg
Before settling on the Samsung SSDs, I tested approx. 12 different SSDs for my workload. A few of those can be seen here.
http://www.pbase.com/andrease/image/...1/original.jpg
One of the major benefits of the LGA-2011 socket is its 4 memory channels. They allow read performance beyond what any LGA-1155 CPU can do.
16.7 GB/sec
http://www.pbase.com/andrease/image/...0/original.jpg
Combined workloads, with simultaneous load on the CPU and the I/O subsystem, proved the value of watercooling. The stock and be quiet! air coolers limited overall performance; using a watercooler increased overall performance by 30% on the non-overclocked CPU.
1.3 million IOPS with 20 SSDs. The CPU is limiting the I/O subsystem here.
http://www.pbase.com/andrease/image/...2/original.jpg
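(A rough back-of-envelope, assuming per-drive figures typical for SATA SSDs of that era, say ~500 MB/s sequential reads and ~80k random read IOPS: 32 x 0.5 GB/s is about 16 GB/s, so the 16.7 GB/sec run is close to the drives' aggregate limit, while 20 drives could in theory push ~1.6 million IOPS, which is why the 1.3 million result points at the CPU rather than the SSDs.)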
For flexibility and to make the many reconfigurations easier, I used long SATA cables. Not pretty, but useful.
This picture was taken when the stock cooler was still on. wPrime and SuperPi don't tax the CPU as heavily as, for instance, y-cruncher, so the benefit of watercooling might not be as big there.
http://www.pbase.com/andrease/image/...8/original.jpg
This server was used for the record runs in y-cruncher, calculating pi to 25, 50 and 100 billion digits in the "single socket systems" category
http://www.xtremesystems.org/forums/...=1#post5153478
The other 2 servers had different design goals. One was built for peak I/O performance on 2 sockets, utilizing 2x E5-2687W, 256 GB and 48 SSDs. The goal with the third server was to get 72 terabytes of capacity under 150 watts at the wall outlet using one E3-1245v2
http://www.pbase.com/andrease/image/...9/original.jpg
cheers,
Andy
Andy,
Omfg.
This section was made for you lol.
-PB
WoW, super sweet setup !
holy cow, I think those controllers and SSDs together cost more than my student income for a couple of years :p:
Andreas: I would say OMG! but that is too mild! LOLOL
:-)
Last year, I saw your Romley system. Soon after Intel released the E5-2687W CPUs ...
I have seen many systems over the years, but from a total system perspective, this platform really shines.
The I/O especially is at an astonishing level. 6x LSI controllers make sure data can pass from the SSDs via the PCI Express 3.0 lanes to memory without too many speed bumps.
http://www.pbase.com/andrease/image/...1/original.jpg
During the heyday of the project, quite a few SATA cables left the box.
http://www.pbase.com/andrease/image/...8/original.jpg
rgds,
Andy
OMG Andreas.. I don't know what to say.. Only that I love your servers..
Oj101, NapalmV, Movieman... I see so many Masters here, this new forum is so exciting :up:
All the XS wisdom now in server flavor too. Great! :clap:
Some awesome stuff in this thread; even my machines that could be considered impressive are put to shame.
That said, below is some stuff I got to do last year!
We start off with several retail SSD's and Intel CPUs.
http://i.imgur.com/44W3slZh.jpg
These are a combo of Samsung 830's, Xeon E5's and Xeon E3's.
The best machine I built was a high density 1U server, with an LSI 9266-4i, a Samsung 830 256GB, 2x 1TB Seagate SAS drives, 128GB of ECC-REG 1600 RAM, and dual E5-2670s.
http://i.imgur.com/yWmGD81h.jpg
http://i.imgur.com/R6wRgLQh.jpg
Here's another machine that I've set up with RAID-10 Crucial M4 512GB SSDs on an LSI 2208-based ROC.
Dual E5-2660 and "only" 64GB of RAM.
http://i.imgur.com/Sq6cPxch.jpg
The picture of the rack in LA.
http://i.imgur.com/vkoY1KY.jpg
http://i.imgur.com/r7dFKDD.jpg
Excuse the crappy cabling job. A few machines were already provisioned as we were on a deadline, so we couldn't pull the cables on those without disrupting service. We tried our best to manage it, and I think for my first time in a DC + rack, I did an okay job.
Very nice Cook !
You're down in Irvine, I see. I have a client down there... TGS; they were running 40,000+ cores last time I was in there.
Looks at my rack here... man I need to fill this puppy up with stuff like this :)
A cost effective way to fill it up
http://www.servethehome.com/Server-d...ode-8-sockets/
rgds,
Andy
Yup, I am indeed. We were planning to provision the gear in a datacenter in Irvine, but the cost of bandwidth there and the lack of 10G and transit options swayed us away. We currently have a 10G uplink per rack for our use, and it is fun to play with. If only I could get my own DWDM gear set up from LA to Irvine =)
Andy, you're killing me man, I would love a bunch of those LOL :)
Dang those are very nice prices tho !
Yeah, I am not sure how they do it connection-wise, but their server setup is very impressive. It's all theirs, no hosted space.
Bought one with 2 nodes, each holding dual L5520s and 48 gigs of RAM. New ESXi nodes :D
As soon as I get the server I'll post pictures of my rack cabinet.
Going to hold:
HP1800-24G
Dell2848
Dell C6100 XS23-TY3
Custom ZFS server in Supermicro 823
Dell PowerVault MD1000
HP Storageworks MSA70
5000VA APC Rack UPS
The last of the server trio is now done.
After a high capacity storage server and a fast data server, the only thing left was a compute server.
Due to their excellent double precision performance, I waited for the availability of the GK110 Kepler GPUs.
The monitor is connected to a small VGA card to keep the 3 Titans crunching undisturbed.
http://www.pbase.com/andrease/image/...2/original.jpg
rgds,
Andy
^^ Can I borrow just one card? Just one measly one? :D
:-)
Sure, stop by tomorrow for a traditional Vienna coffee and Sachertorte ....
It was quite "hard" to get 4 cards. Only 8 Asus cards have been shipped to Austria in total so far.
4 cards, 4 dealers. But they are fast. And silent. And cool. Love it.
My son likes them as well. And borrowed one. Only for an hour. Yesterday :-)
Andy
5 pm. perfect .
Tomorrow is my birthday and the family dinner with friends starts at 6pm. You are welcome.
Too bad I can't send Wienerschnitzel and Apfelstrudel as email attachments :)
Anyway, how is your Supermicro Hyperserver project doing?
I do have a (serious) question: the E5 Xeons are locked down by Intel on BCLK, frequencies and memory speed (max 1600 MHz). How is it possible that Supermicro can run them at 1866 MHz memory speed? Are these special chips shipped by Intel? Any secrets you can share? Just curious.
with kind regards,
Andy
Hey MM, where is your monster rig at? Thought you would have it here by now. Pure E-peen in this thread. Makes my dually :eek2: at what I see and drool at.
Last parts just got here today..Case and these new monster HS
Just to complete the story. The full family is together now.
10,752 CUDA cores
24 GB GDDR5 RAM,
> 1 TB/sec aggregated memory bandwidth (on the cards)
ca. 6 TFlop/s (double precision), ca. 18 TFlop/s (single precision)
(This is roughly comparable to the #1 position on the Top500 list in 2000, the ASCI White machine, which cost approx 110 million US$)
When PCIe 3.0 support is turned on, each card can read/write at about 11 GB/sec on the PCIe bus.
For full concurrent PCIe bandwidth from all 4 cards, a dual socket SB machine is needed, with its 80 PCIe lanes and better main memory bandwidth
(with 1600 MHz DDR3, my dual socket SB delivers ca. 80 GB/sec in the STREAM benchmark).
So, depending on the GPU workload, an LGA 2011 system might be fine (when compute or device memory bound), or a dual-SB board is needed when I/O bound.
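(The per-card numbers line up: 4 x 2,688 CUDA cores = 10,752; 4 x 6 GB = 24 GB; and 4 x ~288 GB/sec of GDDR5 bandwidth is roughly 1.15 TB/sec. On the PCIe side, 4 cards x ~11 GB/sec per direction is up to ~44 GB/sec each way, which is exactly the kind of load that only the ~80 GB/sec dual socket memory system can feed.)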
http://www.pbase.com/andrease/image/...8/original.jpg
cheers,
Andy
There is a known issue with Sandy Bridge-E CPUs and NVIDIA cards.
When Intel released these CPUs they were capable of PCIe 3.0, but not yet certified for the 8 GT/s speed. NVIDIA claimed there were a lot of timing variations across the various chipsets and forced the Kepler cards on those CPUs and motherboards down to PCIe 2.0 speed. Later they released a little utility with which users could "switch" their systems to run in the faster PCIe 3.0 mode.
Here is the utility: http://nvidia.custhelp.com/app/answe...n-x79-platform
Just use GPU-Z to check which speed your system is currently running at, then apply the utility if needed.
Generally speaking:
The GTX Titan in its original mode (2.0) had 3.8 GB/sec write speed and 5.2 GB/sec read speed (tested with the bandwidth utility from CUDA SDK version 5.0).
After switching the system to 3.0, both read and write are now in the 11 GB/sec range.
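For anyone who wants to reproduce that kind of number without digging out the SDK sample, a timing loop around cudaMemcpy is enough. This is only a minimal sketch in plain C against the CUDA runtime's C API, not the utility mentioned above; the 256 MB buffer size and the include/library paths in the build line are my assumptions for a default Linux install.

/* bw.c: rough host->device copy bandwidth check. Pinned host memory is
   needed to get anywhere near the PCIe limit.
   Build (assumed paths): gcc bw.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart -o bw */
#include <stdio.h>
#include <cuda_runtime_api.h>

#define BYTES (256UL * 1024 * 1024)   /* 256 MB test buffer (arbitrary) */

int main(void)
{
    void *h = NULL, *d = NULL;
    cudaMallocHost(&h, BYTES);        /* pinned host buffer */
    cudaMalloc(&d, BYTES);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    cudaMemcpy(d, h, BYTES, cudaMemcpyHostToDevice);   /* "write" to the card */
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("host->device: %.1f GB/sec\n", BYTES / (ms / 1000.0) / 1e9);

    /* swap the arguments and use cudaMemcpyDeviceToHost for the read direction */
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}

On a 2.0 link you should land in the 4-6 GB/sec range quoted above, and around 11 GB/sec once the 3.0 switch has been applied.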
People often complain about the sub-linear scaling of SLI and Triple-SLI systems, sometimes with negative scaling when a fourth card is added.
If the application uses a lot of PCI bandwidth, the memory bus quickly gets overloaded by the combined demands of the graphics cards and the CPU.
Some numbers:
Max theoretical memory bandwidth (max theoretical = guaranteed not to be exceeded):
LGA-1155 socket with DDR3-1600 = 25.6 GB/sec (2 memory channels)
LGA-2011 socket with DDR3-1600 = 51.2 GB/sec (4 memory channels)
Dual LGA-2011 sockets with DDR3-1600 = 102.4 GB/sec (8 memory channels)
Practical limits are strongly impacted by the memory access pattern and can range from 20% to 80% of the max speed.
With the STREAM benchmark, 80% seems to be the upper bound.
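If anyone wants to see where their own box lands between those theoretical and practical numbers, the core of such a measurement is tiny. Below is a minimal triad-style sketch in C with OpenMP, not the official STREAM code; the array size and the compile line are my own assumptions.

/* triad.c: a[i] = b[i] + s*c[i], the same kernel STREAM's "triad" uses.
   Counting 2 reads + 1 write per element, bandwidth ~= 24 * N / time bytes/sec.
   Build (assumed): gcc -O3 -fopenmp triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (64L * 1024 * 1024)   /* 64M doubles = 512 MB per array (arbitrary) */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    double s = 3.0;
    if (!a || !b || !c) return 1;

    #pragma omp parallel for        /* first touch spreads the pages over the threads */
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + s * c[i];
    double t1 = omp_get_wtime();

    printf("triad: %.1f GB/sec\n", 3.0 * sizeof(double) * N / (t1 - t0) / 1e9);
    free(a); free(b); free(c);
    return 0;
}

Run it a few times and take the best result; on a dual socket box the placement of the arrays (and hence NUMA) changes the number noticeably.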
PCI speed:
Modern CPUs feature PCIe 3.0, with roughly 1 GB/sec read and (concurrently) 1 GB/sec write speed per PCI Express lane. So a x16 PCIe 3.0 slot has a combined I/O speed of 32 GB/sec (16 read and 16 write), which completely overwhelms the memory speed of an LGA-1155 system. If maximum I/O speed is to be achieved, the memory bus bottleneck has to be removed. This can be done with the LGA-2011 socket, which provides up to 40 GB/sec memory bandwidth (measured with STREAM).
"Unfortunately" LGA-2011 also has 40 PCIe lanes which, if used effectively, would saturate the 4 memory channels of this system as well. This is what happens when multiple cards capable of high I/O are used (e.g. graphics cards). And even if the memory system were able to provide enough bandwidth for the PCI subsystem, the CPU needs to compete for memory access too.
A further problem is the cache hierarchy. To maintain coherency between what the CPU thinks is stored in main memory and what the devices see, the caches need to be updated or flushed whenever an I/O card writes to main memory. As a consequence, the CPU's access times to those memory addresses can increase significantly (up to 10 fold).
Some relief comes with dual socket LGA-2011 systems: the combined memory bandwidth doubles. Great. If all 4 GTX Titans transmitted data at the same time, there would still be some memory bandwidth left over for the 2 CPUs. To mitigate the aforementioned cache problem, Intel introduced a feature called Data Direct I/O on the dual socket Xeon systems (Romley platform). As in the single socket LGA-2011 system, data from I/O cards is written to main memory via the cache. To avoid the cache getting completely flushed (easy when the cache is 20 MB and the transfer is, say, 1 GB), the hardware reserves only 20% of the cache capacity for the I/O operation, leaving enough valid cache content in place that the CPU can still work effectively with the remaining capacity. The consequence: much better and more predictable CPU performance during high I/O loads.
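Putting the numbers above together: four x16 PCIe 3.0 cards can in principle move about 4 x 16 = 64 GB/sec in each direction (128 GB/sec combined), while even the dual socket machine tops out at 102.4 GB/sec theoretical and roughly 80 GB/sec measured memory bandwidth, so the cards alone can already outrun the memory system before the CPUs ask for a single byte.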
One problem is currently not well addressed in these systems: NUMA and I/O affinity. It will take time until applications like games leverage the information they could get from the operating system about what the architecture of the system they run on really looks like.
Some examples:
1) If the program thread runs on core 0 (socket 0) and its main memory is allocated on the same socket = great. If the memory is allocated on the other socket, a performance hit sets in.
2) With Sandy/Ivy Bridge the PCIe root complex is on die, which creates much better performance on dual socket systems, but also dependencies. If your GPU sits in a physical PCIe slot wired to socket 0 and the program that needs the data has its memory on socket 0, things are great. If the target memory sits on socket 1, the data from the GPU (connected to socket 0) somehow has to get to socket 1. Here comes QPI (QuickPath Interconnect). If QPI is set in your BIOS to energy efficient sleep modes, it always has to wake up to transfer data. Keep it awake for max performance.
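To make example 1 concrete, this is roughly what taking care of affinity looks like in code. Just a minimal sketch using libnuma on Linux; the node number, buffer size and build line are illustrative assumptions, not anything from the systems above.

/* numa_pin.c: run the worker on node 0 and keep its buffer on node 0,
   so the thread never has to cross QPI for its own data.
   Build (assumed): gcc -O2 numa_pin.c -lnuma -o numa_pin */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }

    numa_run_on_node(0);                       /* pin this thread to node 0's CPUs */

    size_t bytes = 1UL << 30;                  /* 1 GB working set (arbitrary) */
    double *buf = numa_alloc_onnode(bytes, 0); /* policy: place pages on node 0 */
    if (!buf) { fprintf(stderr, "numa_alloc_onnode failed\n"); return 1; }

    for (size_t i = 0; i < bytes / sizeof *buf; i++)
        buf[i] = 0.0;                          /* touch the pages so they are actually placed */

    printf("thread and buffer both bound to node 0\n");
    numa_free(buf, bytes);
    return 0;
}

From the shell, numactl --cpunodebind=0 --membind=0 ./app does the same thing for a whole process.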
It is simple:
For compute bound problems, look for the right CPU. For data or I/O bound problems, look at the memory and I/O architecture (and basically forget the CPU).
cheers,
Andy
http://www.pbase.com/andrease/image/...8/original.jpg
geek :banana::banana::banana::banana:! :rofl:
^^ You got that right..I see those four cards and think that one system would have heated my whole house this winter! :D
I was thinking,"Holy cannoli, this rig has more computing power with the GPU's than the early supercomputers".
and, "Man, I paid less for my first car than this rig probably cost".
and finally: "What the heck is he going to do with all of this computing power?"
Thanks for the facts regarding your system. It's mind-blowing to think of the power you can fit into a system a decade later, and it only makes you wonder what computers will be capable of in another 10 years. If only you could go back in time with your machine and sell it to the highest bidder!!!!
Hey guys.. just finished up the build for my storage server...
It's nothing compared to the monsters that run loose here :p:
The config is pretty self-explanatory from the pictures :)
Basically it's the following:
Cosmos S2
i3-2120
Maximus V Formula
LSI 9261-8i RAID card (set up in RAID 5)
24TB of WD Red HDDs
Plextor 128GB SSD for boot
AX850
16GB RAM
GTX 650
H80
The Pictures :)
http://i.imgur.com/A3fBrz0.jpg
http://i.imgur.com/rIkYUue.jpg
http://i.imgur.com/BlGni7X.jpg
http://i.imgur.com/Ts9Wuui.jpg
http://i.imgur.com/yRsXzGO.jpg
http://i.imgur.com/FKQhxCu.jpg
http://i.imgur.com/0BrvZYJ.jpg
http://i.imgur.com/zcB0cme.jpg
http://i.imgur.com/59HlPSD.jpg
http://i.imgur.com/rHZNgQs.jpg
http://i.imgur.com/pseAUK8.jpg
http://i.imgur.com/nxmXPWt.jpg
http://i.imgur.com/zNPo1al.jpg
Cheers and Kind Regards Always ! :up:
Well, my workstation project isn't as awesome as most of the rigs here, but I still want to share my progress. I am building a picture-frame workstation case with an ASUS KFSN4-DRE/SAS, 2 quad core Opteron 2358 SEs at 2.4 GHz, a FirePro V3900, 16GB of DDR2-667 RAM, a 750 watt PSU, and 4 RAIDed 36GB 10K SAS HDDs. Total cost under $300. I am still waiting for the drives, PSU, and SAS cables; they are in transit now. I will start my own thread with all my progress as soon as I get my hardware in.
http://s20.postimg.org/d2f1ikdml/IMG_0582.jpg
http://s20.postimg.org/lfyiiggzh/IMG_0600.jpg
http://s20.postimg.org/mjimuf1ml/IMG_0601.jpg
http://s20.postimg.org/gnax1si65/IMG_0584.jpg
I'm working on a <$300 budget rig myself; it should have pretty close performance to yours except for sequential disk transfers. I went with an Intel dual S1366 board, 16GB of RAM, a 120GB SSD (refurb), and a pair of Xeon EC3539s. I shopped mostly on eBay, but with the SSD, case, and CPU coolers coming from Newegg, I'm at a total of $382 including shipping. I had a spare PSU to use with it, and spare spindle disks, so those costs aren't included. Just waiting for the last parts (the mobo and SSD) to arrive so I can spin it up. Hopefully it'll work out well. Mine is going to be crammed into a pretty small mid-tower ATX case once I get the mods done.
--Matt
Well, I would say your rig will be higher spec'd than mine; I am limited to DDR2, which is a bummer for me. I would also like to benchmark our systems once they're done. I believe that even though your CPUs are lower clocked, they will perform better, and they are lower TDP as well: 65 watt Xeons to my 137 watt Optys. I would also like to see how quickly your rig boots compared to mine. I'm sure mine would still be a slouch, but I can't see myself running SSDs yet. The affordable ones have mixed reviews and the expensive ones are, well, expensive. It doesn't help that I already bought an SSD 2 years ago and it lasted me an afternoon before it disappeared. Out of all my rigs, this is the first time I have attempted using SAS HDDs; I hope it turns out well for me. Hope your rig works out too! Keep me posted; there are not enough people building duallies just for their home PC, we are few and far between.
I guess we'll see. The board I ended up with, due to cost constraints, only has a pair of memory channels for each processor. The SSD is a refurb'd Corsair F120GB2. It's definitely not as good as any of my other SSDs, and sort of a wildcard for reliability, but hopefully better than the spindle disks I have lying around. These chips also only have 2.5 GT/s QPI, which probably sets them back a bit.
I'm down for some bench runs. I'm not sure what I'll be using this box for quite yet, but it'll probably end up becoming my dev workstation if it works out well.
--Matt
Well, I don't build, need or use any of the high-end servers I've seen in this thread, but I have always liked building and using duallie workstations. Here's a compilation of photos of duallies I've built over the years that I found on various hard drives. This is probably 2/3 of the systems I've had over the years. I built all of them except the Compaq, which I got cheap on eBay with some 1.7 GHz Foster core Xeons that I replaced with 2.6 GHz Prestonia core Xeons. I don't have a photo of my first duallie build back in 2000, which was a Rioworks PDVIA dual Slot 1 with dual PIII 700 Coppermines OC'd to 933 MHz mounted on Asus S370-DL slotkets. The slotkets provided core voltage adjustment to get the OC stable. Even though it had low memory bandwidth with that VIA chipset, it ran real sweet and I got hooked on duallies. There were other MSI K8T Master2 FARs, Asus K8N-DLs, Iwill ZMax DPs, and Supermicro XDAL-Es that I can't find photos of.
AMD Duallies
http://i50.tinypic.com/wryw74.jpg
http://i49.tinypic.com/dbmpg2.jpg
http://i47.tinypic.com/2lnejxv.jpg
Intel Duallies
http://i50.tinypic.com/no7xi8.jpg
http://i45.tinypic.com/2ldyey8.jpg
My current duallie is in my sig: an mATX Supermicro X7DCA-L with 2x L5408 Xeons tape-modded to 2.66 GHz, 24GB Crucial DDR2-667 Reg/ECC, an Intel 80GB SSD boot drive, and a single-slot-width PowerColor HD 7750 video card. It's kind of a reprise of one I built back in 2008 in a Lian Li mATX case. With the low voltage Xeons it runs very cool and virtually silent; the PSU is a Rosewill fanless modular 500W Silent Night which adds to the silence. I bought the mobo and 8GB of the RAM at another forum for $165 shipped as a combo with a pair of 2 GHz E5405s, but swapped them out for a pair of L5408s I picked up on eBay for $12 shipped. Here are photos of the initial setup to get it running with a GT 430 video card I had laying around. And, believe it or not, I got it up and running using an Enermax 270W SFX mATX PSU!
http://i46.tinypic.com/10g9yu1.jpg
http://i47.tinypic.com/s3mujp.jpg
I forgot how loud those Dynatron fans are at boot up, so I replaced them with some quieter 80mm 2,500 RPM fans I had laying around.
http://i46.tinypic.com/b85pbm.jpg
Finally I put it in this Xigmatek GIGAS mATX Cube Case. The case comes with four 120mm 1,000 rpm fans - two in front and two at the rear. I'm using a fan controller that came with the case to turn down the CPU fans and the two front fans to keep it almost dead silent.
http://i45.tinypic.com/2hxnn1u.jpg
Here are some graphs showing how power efficient the L5408s (running @ 2.66 GHz, like the L5430s shown in the graphs) actually are when running on a San Clemente (5100) chipset motherboard (no FB-DIMMs). The graphs are from this review:
http://i47.tinypic.com/2elqq36.jpg
The Xigmatek GIGAS has two mountings just for SSDs and another 6 for hard drives. I added a $12 USB 3.0 PCI-e x1 card to activate the front panel USB 3.0 port and installed a 3.5 inch media reader I had laying around with a 5.25 inch front panel bay adapter. I picked up 16GB more of Crucial DDR2-667 Reg/ECC for $60 to get the RAM up to 24 GB, and got the single slot 7750 so I could use the adjacent PCI-e x8 slot for a RAID card in the future. It's been running for a couple of months now, and what started as a trip down memory lane has yielded a really nicely running, compact, inexpensive, quiet 8-core system.
Nice systems. I have built 4 duallies in the past 4 years: 2 Harpertown rigs and 2 Westmere rigs. Crunched with them until they sold. Here is what I currently have:
2x Intel X5670 Westmere processors
48 GB RAM
Sapphire HD 7950 GPU
Corsair HX850 PSU
256GB SSD, 500GB and 2TB drives
Corsair 800D case
Will post some pictures when I move it back where it belongs. Just finished transferring it from a smaller case. This case was worth the money.
I didn't want to retire it, but SCSI disks are hard to find in big capacities.
So it's retired but operational. :)
http://i.imgur.com/TdVET0u.jpg
http://i.imgur.com/zR7T1oa.jpg
http://i.imgur.com/Ri1uQJ5.jpg
ahh, we're doing oldies! :D
This is sitting downstairs on a desk just as you see in this pic.. identical, with 4 Intel 700MHz slot one "Xeons" in it and 16x 1 gig sticks of PC-100 ECC REG.. try and find those today! LOL
Weighs 102 lbs, and when it was delivered UPS sent two guys!
I even have all the manuals and diskettes that came with it.
I heard this was over $22,000.00 when new.. I got it off eBay years ago for $125.00 plus $208.00 shipping from California to me.
AND..Yes, it does still work!..:D
http://www.supermicro.com/products/s...0/sys-8050.cfm
MM, is there any way to use today's parts in that case? Man, that would be one hell of a server using today's Xeons.
We put in a system for the IRS 20 years ago that has been through two major HW "refreshes", and we're now planning #3 for next year. But we still have a Dell PowerEdge 2400 with dual PIII 400s running Solaris in the system, with massive 9 GB SCSI drives! Every site has 3 of them and they are basically no trouble at all; they're just beginning to slow things down a bit.
:)
Compaq Proliant 6000
4 x Xeon 450 MHz
4 GB RAM
18x 9 GB SCSI disks, connected via 3 SCSI buses at 40 MB/sec each. Max I/O rate = 90-100 MB/sec
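(3 buses x 40 MB/sec is 120 MB/sec on paper, so the quoted 90-100 MB/sec works out to roughly 75-80% of the bus limit, with the rest presumably lost to SCSI protocol overhead.)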
http://www.pbase.com/andrease/image/...0/original.jpg
http://www.pbase.com/andrease/image/...6/original.jpg
Andy
Good grief what a monster !
I bet that kicks out some heat & noise when running :)
Keep the oldies coming!
I know some of you XSers ran DP AXP setups, Show me one cooled by a pair of Big Typhoons!
lol, if we're doing oldies then I will dig up my Compaq ML330 G2!
It had 2x P3 Xeons, 4GB RAM, boatloads of hard drives, multiple sound/VGA/RAID cards, the works.
I even had to do a custom PWM solder job for the case fan so that I could run it quietly in my college room under my bed (it ran the CS etc. servers for uni).
-PB
Heat is OK, as the CPUs in those days didn't use too much power. But the noise level is deafening, nothing for the home office.
With regard to size, the biggest of the Compaq ProLiant servers in my collection is the ProLiant 8000: 8 Xeons connected by a proprietary ProFusion chip. 16 GB of RAM cost a fortune in 1998, and the total weight with 21 SCSI drives is approx 240 pounds. Triple PSUs provided decent failure protection.
Bought it last year for $80 - found it in a barn ....
Andy
Something in my collection
http://koti.mbnet.fi/ipe91/b/pentiumpro/IMG_4628.JPG
Quad PPro board which supports up to 4GB of SDRAM :D
You know, we were posting about our older yet capable duallies with 2+ GHz quad cores and support for 1TB HDDs. Now all of a sudden these systems are being categorized along with these massive, power hungry, prehistoric calculators! :rolleyes:
Kudos, though, for keeping those off the scrap heap! I hate to see people discard their old PC just because they have something newer; unless you discard it into my hands, then I forgive you. LOL
The case is great. The noise and power levels are less than stellar.
My current home server has 72 TB, uses 150 watts with all disks spinning, and runs an E3-1245v2 plus 32GB RAM for a few VMs.
http://www.pbase.com/andrease/image/...4/original.jpg
And the noise level is very low.
Andy
Andreas, what kind of case is that?
How many hard drives does it take, and are they hot swappable?
Yes sir !
Not with big typhoons but watercooled :D
http://i.imgur.com/XXV7arR.jpg
Actually this system is not watercooled anymore, because it replaced my friend's family PC.
Only thing Xtreme about my server is its size. It is just for my own personal use, mostly just downloading and mass storage.
G555 Celeron
P8H77-I
LSI 9211-8i HBA
8GB KVR 1333
40GB X25-V boot drive
25TB Drive Bender storage pool.
http://i18.photobucket.com/albums/b1...psc9863a9b.jpg
http://i18.photobucket.com/albums/b1...ps57349d13.jpg
network transfer speed
http://hostthenpost.org/uploads/ad31...97c205ea81.jpg
This is the Lian Li 343B server cube. It has 18x 5.25 inch bays, which I filled with 6 Lian Li HDD cages (4 drives per cage, hot swappable).
The E3-1245v2 is extremely efficient, less than 2 watts when idle. I use the built-in graphics unit for additional power savings.
The RAID controller is the new Adaptec 7 series with 24 ports and a flash memory based "battery backup" system.
Disk drives are 24x WD Red 3TB, plus two RAID0 SSDs for the OS.
Network connection is via teamed 4x Gbit, which sustains ca. 450 MB/s for reads and writes to the server.
Normally 4-6 virtual machines are active for various functions. One VM hosts WHS, which keeps daily backups of all machines at home.
Disk deduplication is turned on, which in my case saves ca. 30% of disk capacity.
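(A quick sanity check on that teaming figure: 4x Gbit is 4 x ~125 MB/sec = ~500 MB/sec of raw line rate, so a sustained ~450 MB/sec is roughly 90% of it, which is about as good as link aggregation gets in practice.)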
I think the cube is discontinued, the new server case is the D8000
Andy
I still want to know what Andy does for a living :D
-PB
He spends his days staring at his collection of 10,000 24 lb gold bullion bars! :D
Andy: I have a new version of that LL 343B that isn't the D8000... :wasntme:
It's set up for two 280 rads in the top and one in the inside bottom..
I think there are three that exist and one is sitting here.
Even the LL rep didn't believe I had it till I showed him a pic.. :rofl:
That's interesting. Over here in Europe, this case came and went on and off the market multiple times. I've seen posts in German server forums claiming that the German subsidiary of Lian Li is behind some of the developments of this case, but I can't verify the accuracy of such statements. It might be that you have one of those cases which were sold in Germany but were at the same time listed on the worldwide website as discontinued.
Anyway, it is a convenient product if you need to get many disk drives into one case. With proper fans (I use Noctua) the noise level is super low. That's an added benefit of the low power design goal: low power -> low heat -> low fan speed -> low noise. No component is above 45 degrees.
rgds,
Andy
So, have you received your motherboard and SSD yet? I have decided to use 3x Seagate Cheetah 15K.5 146GB SAS HDDs, in RAID 0 of course. Shortly after I received the 73GB version of the drives I was going to use, I was a bit disappointed with their read and write performance, so I went back online and purchased 4 Seagate Cheetahs, but one of the sellers dropped one and cancelled my order, so I only have 3 en route now, and I guess I will stick with those for now. If I want more, I'll get a fourth later.
I received the screws for the heatsinks and modded them to fit yesterday. Unfortunately, it's a non-starter though :( It seems the EC3539s that I bought aren't 2P chips and, consequently, aren't supported by this board. All I got was a 1-5-2-1 beep code (no cpu detected). Oh well, off to find a workable pair of 2P chips. There seem to be plenty of L5520s available for not too horribly much more than the EC3539s.
--Matt
I'm sorry to hear it. I just looked it up and they are not 2P chips (Link). I know you're aware of that now, but I just had to see if maybe it was possible. Anyway, I would like to see your modded heatsinks.
By the way, I have a Radeon HD 5770 GPU which was dead; one day it flickered on and off for a few hours and then it was gone. Now for the crazy good news: I ran across a forum post about broken GPUs where a guy was, interestingly, successful at bringing his card back to life by baking it in his oven for 10 minutes at 200 degrees Celsius!! Well, what did I have to lose? I did it and now it freaking works!! The only caveat is that it only works when the computer is turned on after a full shutdown; it doesn't work when I just reset my rig. It is a bit of a hassle, BUT, I have a working 5770!! I benched it, stressed it and left it on all day and so far, rock solid! I may just run the 5770 now instead of the FirePro, since the 5770 is far superior to the FirePro V3900, spec-wise. Good luck finding a pair of Xeons for your rig.
BTW, I found some on eBay for what I think is a good price (Link). These should be a really good replacement since they turbo clock higher and they are hyperthreaded! 16 threads!! :)
Haha, I'm a bit more excited about it now myself. HT and normal-speed QPI sound much better. All of the dual Nehalem Xeon systems I work with at work have three channels of RAM populated per CPU, so I'm interested in finding out how big of a difference having only two will make. I've had single-socket Nehalem i7 systems, but never with any fewer than all 3 channels populated.
--Matt
Does this count ?
I received everything that I needed to finish my rig, but it looks like this mobo is bad. Both CPUs work properly when in socket 1, but nothing works if 2 are installed, no matter which chip goes in which socket. The diagnostic LEDs don't blink at all; it just powers up and sits there. I'm going to try to get a replacement from the eBay merchant if I can. It does run just fine with one installed though:
http://img11.imageshack.us/img11/1514/89620073.png
--Matt
Woot, I've been granted an RMA. Hopefully this'll all be sorted soon.
http://koti.mbnet.fi/ipe91/b/sisustuselementti/11.jpg
From top to bottom:
Dell C6100 XS23-TY3, 2 nodes. Each holding
-2x L5520
-48GB DDR3 ECC Reg
Empty SC723T-500B, painted pink by an earlier owner :rotf:
HP MSA70, 25x2.5" SAS/SATA DAS
Dell Powervault MD1000 15x3.5" SAS/SATA DAS
These are not deployed yet! I ought to silence the MSA70 and MD1000 before using them.
They're also a bit tricky to fan-swap as they have a lot of logic for fan detection. So quite a lot of oscilloscoping incoming, and presumably some custom electronics :D
Looking into getting a 3rd C6100 node to function as a Solaris server with L5630s and 8-16GB of ECC DDR3.
My new little beast, HP DL160 G6.
2x L5520, 16GB DDR3 ECC and small 160GB SATA hdd.
Next steps: upgrade to 48GB and expand storage capacity + RAID controller and then move it to DC.
http://img15.imageshack.us/img15/7418/64627481.jpg
http://img838.imageshack.us/img838/5020/68007640.jpg
http://img845.imageshack.us/img845/9277/46625698.jpg
How audible are those gen 6 HPs?
I had a 2U gen 7 HP server running on my desk here at work for a while; I couldn't really hear it in a normal office environment.
I'm sorry to hear that... Well, at least you were granted an RMA. Have you received it yet? I have mine up and running now. I ended up getting an SSD for my boot drive, an OCZ Vertex 2 240GB. I hope I have good luck with SSDs this time around. I also ordered a Radeon HD 7850 2GB graphics card. The sad thing is that by the time I'm finished with the mods, my CPUs, PCI-E slot, and memory will be the only bottlenecks, which means................... a motherboard upgrade after a while. I am never satisfied, lol. BTW, I do have a newer board that has DDR3, 6-core, and PCI-E 2.0 (x16) support, BUT that one is not as fun as my dually. :)
Got some new toys today.
http://img.tapatalk.com/d/13/04/19/6ese4uru.jpg
--Matt
Just slapped haphazardly together, but it's alive!
http://img.tapatalk.com/d/13/04/20/7uhabazy.jpg
--Matt
Here is mine so far. I have been a bit too lazy to finish the frame, but it can be hung on the wall now. :)
http://s20.postimg.org/a61ul8tj1/CAM00255.jpg