
Thread: New Rig for WCG/Fileserver purposes

  1. #1
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602

    New Rig for WCG/Fileserver purposes

    Hey folks!

I was about to build a fileserver, since everyone here keeps complaining about their HDDs being constantly full (myself included). I wanted to get 4 to 8 750GB SATA drives, get a decent RAID controller for a RAID 5, and put it all in a cheapo Rebel9 case on a cheapo C2D rig with an E2180, a P35 board and some DDR2 I have lying around. It would be watercooled though, because I was going to put the HDDs in so-called quadboxes made by Watercool Germany (they absorb vibrations and noise, and provide great cooling for 4 drives each) and because I want it to be dead quiet.
Now I have been thinking about using something more "serious", as I would like to have a crunching monster for WCG, too.

So my eyes are set on a dual Harpertown now, having read here that they are very energy-efficient and rock in general.

    So, I need assistance with the following points:

1. Which mobo should I get? I was thinking about the Asus DSEB-D16/SAS, so I might even save myself the 350€ for an 8-port SAS card.
Any other suggestions?
Edit: Hah, that would have been too easy. The Asus homepage says I would need an extra add-on card for RAID 5... I bet it's going to be expensive.
Anyway, I found another possible mobo for the purpose, since this Asus has the "SSI EEB 3.61" form factor, which seems to be very exotic. In fact, I only found one case supporting it, and that one sucked big time, so:

Tyan Tempest i5400XT: without the SAS controller it's only 335€ and seems to support all I need (400MHz FSB, 45nm quads and enough add-on card slots), and it's standard E-ATX.

Edit 2: Crap, the one above only supports DDR2-667 and 333MHz FSB... I'd have to get the S5397 for FSB 400/DDR2-800 support. Costs another 40 bucks, but what the heck. The bigger problem is that I scanned the manuals for these Tyans and, frustratingly though expectedly, couldn't find a jumper or BIOS setting for the FSB. So I'd have to mod the CPUs to get 400MHz FSB, I guess...

Edit 3 (lol): Ha, found something. The Asus Z7S is standard ATX and supports overclocking in the BIOS! FSB, Vcore and even PLL voltage, if you can believe it. Does anyone know when it's going to hit the market? I guess it should also be fairly cheap... and I could use a smaller case. It's my favorite right now, except that it's not available anywhere.

2. Which CPUs should I get? I thought about getting E5420s and setting them to 400MHz FSB, so I'd have 3GHz; that should be possible because the mobo supports a 400MHz FSB.
But from the power-saving angle, getting some L Harpertowns might also be interesting. They have a TDP of 50W as opposed to 80W, and saving 30W per CPU will surely pay off in 24/7 operation (see the quick math after this list). I am talking about those

3. What memory should I get, and how much? I really don't know anything about FB-DIMMs, but I do know I don't need that much RAM. For a 400MHz FSB I guess I should get 800-rated FB-DIMMs, or can the 667s overclock easily like normal DDR2?

4. What case? Now this is going to be difficult, since I will be watercooling the setup. I will need at least eight 5.25" bays plus space for a triple rad, pump and so on. It also has to be dead quiet, like I said... I'm thinking about the TJ07, but I'm not sure whether an SSI mobo will fit.
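To put some rough numbers on point 2, here's a quick back-of-envelope sketch (7.5 is the stock multi of the E5420/L5420; the 0.20 EUR/kWh rate is just a guess for the math, and TDP is a ceiling rather than measured draw):

Code:
# Target clock from an FSB bump, and what 2 x 30W less TDP could be
# worth over a year of 24/7 crunching. Price per kWh is a placeholder.
multiplier = 7.5                  # stock multi of the E5420 / L5420
for fsb in (333, 400):            # stock vs. modded FSB in MHz
    print(f"{fsb} MHz FSB -> {fsb * multiplier / 1000:.2f} GHz")

saved_watts = 30 * 2              # two 50W L5420s vs. two 80W E5420s
kwh_per_year = saved_watts * 24 * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/year saved, "
      f"~{kwh_per_year * 0.20:.0f} EUR at 0.20 EUR/kWh")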

I will be running Windows XP x64 on it, like I am on my main rig right now.

Now, I know this is not exactly the right forum for this kind of request, but I figured that since this will be mostly for WCG crunching, it's all right.

    Thanks in advance!
    Last edited by jcool; 02-21-2008 at 05:20 PM.
World Community Grid - come join a great team and help us fight for a better tomorrow!


  2. #2
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
Comments/ideas, anyone?


  3. #3
    version 2.0
    Join Date
    Feb 2005
    Location
    Flanders
    Posts
    3,862
5400 board for sure (one with 400MHz FSB support; some 5400 boards only do 333MHz FSB officially). Tyan or Supermicro, both good choices, I think.

Those L5420 CPUs are very nice, only 50W TDP.
I would get those if you can afford it. Energy efficient also means less fan noise, and maybe one size smaller PSU.

Memory: I would suggest FB 800 DIMMs, but those are still rare. It's not certain that 667 FB-DIMMs will work with a CPU BSEL-modded to 400MHz FSB; it depends on the motherboard, *I think*. S_B and MM know more about it.

Case? No idea, still looking myself.

  4. #4
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
I can get a DDR2-800 2GB Kingston FB-DIMM here in Germany for under 100€, no problem.
1GB sticks are also available for under 50, but I only see Kingston; it seems they have a monopoly here for now.

How is the performance of these FB-DIMMs anyway? From what I understand, I have to populate 1, 2 or 4 channels. But do I get real performance improvements from running all 4 channels compared to just 1 or 2?
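For a rough idea of what the extra channels buy in theory (real FB-DIMM throughput is lower because of the AMB serialization, so treat these as upper bounds), here's a quick sketch:

Code:
# Peak theoretical DRAM bandwidth per populated FB-DIMM channel count,
# ignoring the AMB/serial-link overhead that FB-DIMMs add on top.
def peak_gb_per_s(mt_per_s, channels, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000

for ch in (1, 2, 4):
    print(f"DDR2-667 x {ch} channel(s): ~{peak_gb_per_s(667, ch):.1f} GB/s")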

The question is whether these L5420s will hit 3GHz at the default (presumably very low) Vcore... I'm afraid the risk is a bit too high, since I haven't seen any Vcore mods via pinmod so far.

About the case: in the German HWLuxx forums, someone is using the above-mentioned Asus DSEB (the one with the strange form factor) in a Lian Li V2000, no problem. I'm currently thinking about a Lian Li V343B cube or a Silverstone TJ07; both can easily fit E-ATX and my WC setup.


  5. #5
    version 2.0
    Join Date
    Feb 2005
    Location
    Flanders
    Posts
    3,862
I don't know about the L5420 CPUs. If they run 1.1V Vcore as standard, 3GHz is asking a lot. Worth a try anyway; 2.5GHz is not bad either.

  6. #6
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
On the other hand, my Q6600 can do 3GHz at 1.1V easily, and it's 65nm, so it should be possible.


  7. #7
    XS_THE_MACHINE
    Join Date
    Nov 2004
    Location
    Long Island
    Posts
    4,678
I was actually researching RAID options a couple of days ago and thought that some of the rarer RAID levels (RAID 50, 60, or 0+1) would probably be the better option. Unfortunately, none of the Supermicro boards seems to offer any of those, so you will need to buy a RAID card in addition to the motherboard. With CPUs and RAM I am not much help. As for cases, get something with a lot of room; I would suggest maybe a Stacker or something along those lines.
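For a quick feel of what those levels cost in capacity on, say, 8 x 750GB drives, here's a toy calculator (RAID 0+1 nets the same capacity as RAID 10; the group count for 50/60 is an assumption):

Code:
# Usable capacity for common RAID layouts over n identical drives.
# groups = number of parity sub-arrays in a RAID 50/60 layout.
def usable_gb(level, n, size_gb, groups=2):
    return {
        "5":  (n - 1) * size_gb,           # one drive's worth of parity
        "6":  (n - 2) * size_gb,           # two drives' worth of parity
        "10": (n // 2) * size_gb,          # everything mirrored once
        "50": (n - groups) * size_gb,      # one parity drive per sub-array
        "60": (n - 2 * groups) * size_gb,  # two parity drives per sub-array
    }[level]

for lvl in ("5", "6", "10", "50", "60"):
    print(f"RAID {lvl:>2} on 8 x 750GB: {usable_gb(lvl, 8, 750)} GB usable")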
    "Victory is always possible for the person who refuses to stop fighting"

    clicks to save kids

  8. #8
    Xtreme Enthusiast
    Join Date
    Apr 2007
    Posts
    700
    Nice
    ʇɐɥʇ ǝʞıl pɐǝɥ ɹnoʎ ƃuıuɹnʇ ǝq ʇ,uop

  9. #9
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
Very helpful, Brian

Anyway, as for the case, I was thinking of maybe getting a V343B... it's got plenty of room for watercooling and eighteen 5.25" bays.
Plus it looks cool for a server; it's just rather expensive...


  10. #10
    XS_THE_MACHINE
    Join Date
    Nov 2004
    Location
    Long Island
    Posts
    4,678
What is the maximum number of people you think will be accessing the server at one time? Like 2-3, or more?
    "Victory is always possible for the person who refuses to stop fighting"

    clicks to save kids

  11. #11
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
Not that many, a max of 4-5 I'd say. Nothing the SuperTrak EX8650 couldn't handle. My main concern is storage, but it's always nice to have the speed.


  12. #12
    Coat It with GOOOO
    Join Date
    Aug 2006
    Location
    Portland, OR
    Posts
    1,608
Hehe, I'm on the same page as you guys too (only still not quite dual socket). I have been ogling one of these:
    http://www.highpoint-tech.com/USA/rr3520.htm
    Main-- i7-980x @ 4.5GHZ | Asus P6X58D-E | HD5850 @ 950core 1250mem | 2x160GB intel x25-m G2's |
    Wife-- i7-860 @ 3.5GHz | Gigabyte P55M-UD4 | HD5770 | 80GB Intel x25-m |
    HTPC1-- Q9450 | Asus P5E-VM | HD3450 | 1TB storage
    HTPC2-- QX9750 | Asus P5E-VM | 1TB storage |
    Car-- T7400 | Kontron mini-ITX board | 80GB Intel x25-m | Azunetech X-meridian for sound |


  13. #13
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
Nice controller; it has the exact same CPU/RAM as the Promise, so it should perform similarly. The only difference is that it can't handle SAS drives.


  14. #14
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
I'd do a cheap $60 P35 + $60 E2140. Two cores are beneficial, but overclocking is unnecessary, and preferably avoided for greater stability (I'd actually underclock and undervolt for power savings), and I'd "splurge" by filling it with 8GB of $125 (US) generic DDR2. More cache, faster server, and less drive access = greater disk longevity. Of course, if you go for one of those server boards that need ECC FB-DIMMs, your memory cost doubles, but they do give you 8 to 16 DIMM slots, which I'd completely fill with the cheapest 2GB DIMMs I could find.
OpenSolaris for ZFS (which I'd configure with compression; less disk access = greater longevity and speed again), raidz, and DTrace. Configure it with iSCSI, Etherboot and gPXE (you can boot XP, W2K, etc. off the server and go diskless on your workstation); a sketch of the ZFS side follows below. If you're not familiar with these, read up on them; you'll be salivating before you're through. And there's no need to fret over the noise-vs-cooling trade-off: build a regular system and stick it in a closet, preferably in a room across the house connected by a long Cat6a cable.
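To show how little typing the ZFS side actually is, here's a minimal sketch shelling out to the stock zpool/zfs tools on OpenSolaris; the pool name and disk names are placeholders for your actual devices:

Code:
# Create a single-parity raidz pool, enable compression, share a dataset.
# "tank" and the c1tXd0 names are placeholders - substitute your devices.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

disks = ["c1t1d0", "c1t2d0", "c1t3d0", "c1t4d0"]
run("zpool", "create", "tank", "raidz", *disks)  # raidz = single parity
run("zfs", "set", "compression=on", "tank")      # lzjb compression
run("zfs", "create", "tank/share")               # dataset for the shares
run("zfs", "set", "sharenfs=on", "tank/share")   # export it over NFS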
    Last edited by keiths; 02-23-2008 at 06:25 AM.

  15. #15
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
Thanks for your input, but I've set my mind on that dual Harpertown now.
I really want some firepower there, and a cheapo dual core isn't going to give me that. As for cooling, there is no alternative to watercooling for me, and I won't put it in a closet either, because it's going to look great.

I'm very interested in your software opinions though. I'm not really into anything but Windows, but running the server off a networked HDD sure sounds good.
I didn't want to put the server's OS onto the RAID array anyway; I had just thought of using a small 2.5" drive for that purpose.
However, I'm not so sure about WCG performance under Solaris (which this is really all about, apart from the file serving, which probably even my P3 500 could handle); I've heard BOINC performs best on 64-bit Windows, and that's what I am using already.

About the case, I think I've settled on a Yeong Yang cube. A good friend of mine wants to trade his in for 100 bucks, and it'll be perfect; it saves me lots of cutting and even some money compared to buying a new one.


  16. #16
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
The RAID-configured HDDs would be plugged into the server along with the memory; it's your workstation that would be diskless. That is, if you plan to have two systems: if you intend this machine to be the same one you sit at, then the file-server/diskless-workstation setup isn't applicable.

  17. #17
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Hey,
no, the dual Harpertown rig is going to be a separate, new rig, used mainly for lots of (fast) storage and of course WCG crunching.
I wanted to build the RAID 5 into my main system at first, but I figured it would not be advisable to couple a server-class RAID 5 setup with a system that normally runs RAID 0 and uses extremely overclocked hardware with experimental cooling solutions (SS...).
In the event of something burning or dying (not that unlikely with a 65nm quad over 4GHz), I wouldn't have access to the data.

My primary concern is to have a great "all-in-one" system that delivers high performance (per watt); building costs are a secondary concern. I also don't want to end up with too many systems: I already have the main rig, the HTPC and my subnotebook. This server would be the fourth and final rig, so I need it to be truly awesome (if that makes any sense, lol).

So, like I said earlier, I was going to set up the RAID 5 with a max of 8 drives and run the server's OS from a small additional HDD. Nothing wrong with that, is there? Or can I run the OS from the RAID 5 without losing much performance?


  18. #18
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
A boot drive is fine. As for RAID 5, raidz is way better; here's an explanation of why: http://blogs.sun.com/bonwick/entry/raid_z
And then there are all the features ZFS has. Some roughly equivalent features can be had on Linux with reiser4 + LZO compression + LVM, but it pales in comparison, and there is no Linux contemporary of raidz. ZFS can be run on Linux via FUSE, but the performance is horrible; that port is really for academic testing.
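To make the parity mechanics concrete, here's a toy sketch of the XOR parity that both RAID 5 and raidz build on; raidz's trick is that its full-stripe, copy-on-write writes keep data and parity consistent, closing the RAID 5 write hole the linked post describes:

Code:
# Toy single-parity stripe: parity is the XOR of the data blocks, so any
# one lost block can be rebuilt by XOR-ing the survivors with the parity.
from functools import reduce

def parity(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]        # three data "drives"
p = parity(data)                          # the parity "drive"

rebuilt = parity([data[0], data[2], p])   # drive 1 died; rebuild it
assert rebuilt == data[1]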

  19. #19
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Quote Originally Posted by keiths View Post
    Open Solaris for zfs(which I'd configure with compression, less disk access = greater longevity & speed again), raidz, & dtrace. Configure with iscsi, etherboot, gpxe(you can boot xp, w2k, etc off of server and go diskless on your workstation.)
Okay... so in order to use raidz, I would have to install the Solaris OS on the server and use the ZFS filesystem (instead of NTFS), right? ...and then go from there with the programs you mentioned. Raidz sounds great, but Solaris is a bit much for a Windows kid like me, I'm afraid.

About the software you mentioned: is it all open source (free), or do I have to pay?
Do you think it's realistic that I could get this working properly by reading tutorials and doing basic trial-and-error? I really don't know anything about compiling software and the like, being just another hardware addict.

So what do you reckon... possible/worth learning?

If so, more information/links to tutorials for noobs would be much appreciated.


  20. #20
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
It's all open. They even have a free starter kit: http://get.opensolaris.org/
If you want to try out Solaris, Linux, BSD, etc. without building a system, check out QEMU and VirtualBox. QEMU has a lot of images already prepared, and VirtualBox has a more polished configuration process.
    qemu: http://fabrice.bellard.free.fr/qemu/
    virtualbox: http://www.virtualbox.org/
    images: http://www.oszoo.org/wiki/index.php/Main_Page
here's an online trial; it even has a Solaris image: http://floz.v2.cs.unibo.it:8880/
    Last edited by keiths; 02-23-2008 at 10:50 AM.

  21. #21
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Thank you, I'll look into it.

One other storage question: can I tell a RAID 5/raidz/whatever array to shut all its drives down when there has been no access for, say, one hour? There will be days or even a week without a single access, I think, and spinning the drives down would save some power in 24/7 operation. Does that depend on the controller, or can I just tell the OS to do the spin-down after X minutes of idle (like with single drives attached to the mobo)?


  22. #22
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
There's power management (http://www.sun.com/bigadmin/features...er_saving.jsp), but RAID 5/raidz stripes data across all the drives in the array, so nothing at all can have touched the array in that hour. If you load up on system memory for drive cache and don't flush to disk during that time (you'll need to turn off cache flushing: http://milek.blogspot.com/2008/02/25...-and-zfs.html), it's... possible.
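As an illustration of the per-drive route (assuming a Linux box where the array members are visible to the OS; with a hardware RAID card the drives sit behind the controller and its own utility has to do this, and on Solaris you'd use power.conf instead):

Code:
# Set each drive's own standby timer via hdparm. Per hdparm(8), values
# 241-251 mean (n-240) x 30 minutes, so 242 = spin down after 1 hour.
# The device names are placeholders for the actual array members.
import subprocess

for disk in ("/dev/sdb", "/dev/sdc", "/dev/sdd"):
    subprocess.run(["hdparm", "-S", "242", disk], check=True)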
    Last edited by keiths; 02-24-2008 at 02:58 PM.
