
Thread: Project "The Wall" - dedicated i7 Cluster

  1. #1
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602

    Project "The Wall" - dedicated i7 Cluster

    Hey folks,

    I was waiting on whether FUGGER would dig up some working backups, but apparently that ain't happening anymore, so here I go... a repost with some minor updates



    The idea here is to create the perfect array of crunchers within a budget: main focus on high performance (obviously..) while maintaining low running costs (electricity...) and pleasing aesthetics, since in the end this will go on my living room wall (yeah, I know ). That's also the reason I named the project "The Wall".

    The nodes will be stacked on top of each other; for starters I'll go for a 2-wide, 2-high config. Once those 4 nodes are finished, stage 1 will be complete. Ultimately I'm thinking about a stage 2 featuring 9 nodes (3 wide, 3 high), but that's somewhere in the more distant future, I suppose.

    Now, let me show you what I've got right now; it's all the parts I need for node #1:



    Pretty much the main parts... Next, some close-ups:





    Full HW list, though you may have guessed what it is already

    - Case: LianLi V350 µATX Aluminum Cube
    - CPU: Core i7 920 C0 for now, will be replaced with a W3520 D0 in a week
    - CPU Cooler: Scythe Zipang. It features 6 heatpipes and a 140mm (!) fan, definitely the biggest heatsink you can fit in that case. It'll blow the heat directly into the PSU, which will then transfer it out the rear, meaning no heat should stay in the case for long. Distance CPU fan <-> PSU fan is like 1-2cm xD
    - Mobo: DFI X58 Lanparty JR, a real piece of art. 6 DIMM slots, SLI & Crossfire capability, decent cooling/VRM along with ALL the OC options you can dream of, Diag-LED etc, all slapped on a perfectly designed µATX PCB. Kudos to DFI for making it happen
    - VGA: a 55nm Palit/Gainward-layout GTX260-216 running on GPUGrid (sadly it's a bad OCer; 650/1450 core/shader is all it'll do) - obviously not the card in the pic
    - RAM: Very old OCZ Reaper DDR3-1333 rated for CAS 6-6-6 at 1.75V. I've had them for like forever; had to swap them out because some Asus 790i didn't like them.. at all. Initial tests show they run fine at 1500MHz CAS 7-7-7 with 1.6V
    - SSD: Yeah.. no HDDs for a node that state-of-the-art. It's a 32GB Patriot Warp V2 SSD I have no other use for right now (from an RMA; the other SSD from the RAID 0 array died)
    - Power: a Seasonic S12-II 430 unit will provide efficient and clean power

    The plan is to build 1 node per month, so I can start trying to form a cluster next month. I'm gonna need some help with that, since I have no clue how to do it yet. I'm guessing I'll need to do some kind of Linux l33t h4xXx0r install, but I'm counting on you guys to help me out when the time comes

    ---------------------------------------------------------------

    Node specs summary:

    PPD: 25k WCG at 3.4GHz --> 30k with the W3520
    Power draw: 190W headless, 300W with GPUGrid
    Space requirements: 262x279x373mm
    Noise level: below 22/25 dBA idle/load
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  2. #2
    of the Strawhat crew.
    Join Date
    Mar 2008
    Location
    West TN
    Posts
    1,646
    At first I thought that the SSD came with the PSU. You can imagine my thoughts...

    XtremeSystems BF3 Platoon - Any XS member is welcome.

  3. #3
    XS WCG Hamster Herder
    Join Date
    Mar 2007
    Location
    Pacific Northwest, USA
    Posts
    2,389
    I know way less than you do about clusters, so I'll ask an obvious noob question. Sorry.

    Is the advantage of doing this to create a set of machines that appear as one logical machine to WCG? Is greater crunching efficiency gained somehow? You certainly will rule the machine point producer ranking with it...

    Regards,
    Bob
    If You ain't Crunching, you ain't Xtreme enough. Go Here
    Help cure CANCER, MS, AIDS, and other diseases.
    ....and don't let your GPU sit there bored...Crunch or Fold on it!!
    Go Here, Or Here

  4. #4
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Quote Originally Posted by 123bob View Post
    I know way less than you do about clusters, so I'll ask an obvious noob question. Sorry.

    Is the advantage of doing this to create a set of machines that appear as one logical machine to WCG? Is greater crunching efficiency gained somehow? You certainly will rule the machine point producer ranking with it...

    Regards,
    Bob

    You can't know less than me about clusters, since I already know nothing about them
    Except maybe how to hook up the HW...

    Why do I want to do this? Several reasons.. I've never done it before, so I'd like to try; also, opening up taskmgr and seeing 32 cores would sure help make my day
    And then there's the coolness factor of running a cluster, and also having 1 logical beast in the stats ofc

    There won't be any efficiency gains, in fact the goal here must be to avoid any efficiency losses compared to running all the nodes individually.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  5. #5
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    some where in Maine
    Posts
    757
    I'm liking the setup, but in my case I'm looking for a more open box like the Antec 300 to run 2 GPUs; heat can be a factor in that little case. Also, I'm thinking of putting in a server CPU like the Xeon E5504; it's an 80W CPU, so that should cut the heat down some. I just need some memory suggestions. Good luck on your build

  6. #6
    version 2.0
    Join Date
    Feb 2005
    Location
    Flanders
    Posts
    3,862
    Quote Originally Posted by shadowwind View Post
    I'm liking the setup, but in my case I'm looking for a more open box like the Antec 300 to run 2 GPUs; heat can be a factor in that little case. Also, I'm thinking of putting in a server CPU like the Xeon E5504; it's an 80W CPU, so that should cut the heat down some. I just need some memory suggestions. Good luck on your build
    That CPU is lacking HT (and half the L3 cache; it only has 4MB).
    The E5520 has HT and the full 8MB L3 cache.

  7. #7
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    some where in Maine
    Posts
    757
    Thanks, I just added the E5520 to the cart. I just need to figure out what memory will work in that little DFI board. I'm a noob with Intel; all my stuff is AMD.

  8. #8
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    The DFI takes any kind of DDR3; it is a standard desktop motherboard, after all.
    I wouldn't go with an E5520 in there, by the way. The W3520 Xeon is a lot cheaper, will OC higher, or can OC + undervolt so you can get the power draw way down.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  9. #9
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    I have done some cluster work. While I am not sure what you mean by cluster, I can tell you the true definition of cluster requires the application to have been coded to run on a cluster. WCG is not. WCG is a grid.

    I was a Sr. UNIX Systems Admin: Solaris, AIX, Linux, FreeBSD and SCO. I'll help where I can, and can do some research for you if you define what exactly you want to accomplish. There are cluster-like tools where you can issue commands on one machine that execute on others, but they are not a true cluster, just control and management tools for multiple machines.
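
    Something like this is really all those tools do under the hood. A minimal sketch, assuming passwordless SSH keys are already set up; the hostnames are placeholders:

    Code:
    #!/bin/sh
    # on-all-nodes.sh: run the same command on every node from one control box
    # (hostnames are made up; needs passwordless SSH keys on each node)
    for node in node1 node2 node3 node4; do
        ssh "$node" "$@" &    # kick off the command on each node in parallel
    done
    wait                      # don't return until every node has finished

    e.g. ./on-all-nodes.sh uptime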

    My plan is to try and set up a group of diskless machines running Linux that are all controlled from one machine, all running WCG. But that is still not a true cluster.

    Hope my explanation helps.

    I love the stacked cubes idea.

  10. #10
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Liverpool
    Posts
    162
    I wondered where this had gone. Nice idea, it would look cool with lots of cubes in the corner of the room

    I can't help out with the cluster problems though, sorry

  11. #11
    -110c club
    Join Date
    May 2008
    Location
    by the LAMP!
    Posts
    553
    I found this, it may be worth a read
    http://www.scl.ameslab.gov/Projects/...ara_intro.html
    I LOVE LAMP!

  12. #12
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Quote Originally Posted by PoppaGeek View Post
    I have done some cluster work. While I am not sure what you mean by cluster, I can tell you the true definition of cluster requires the application to have been coded to run on a cluster. WCG is not. WCG is a grid.

    I was a Sr. UNIX Systems Admin: Solaris, AIX, Linux, FreeBSD and SCO. I'll help where I can, and can do some research for you if you define what exactly you want to accomplish. There are cluster-like tools where you can issue commands on one machine that execute on others, but they are not a true cluster, just control and management tools for multiple machines.

    My plan is to try and set up a group of diskless machines running Linux that are all controlled from one machine, all running WCG. But that is still not a true cluster.

    Hope my explanation helps.

    I love the stacked cubes idea.
    Hehe, excuse my noobness for using the word "cluster" then
    What's the difference between cluster and grid computing, anyway?

    What I want to "accomplish" is 4 identical cubes (= nodes) to be seen as one unified computer (= cluster?). As in, I run one OS on all 4 simultaneously and therefore have 32 threads at my disposal from a single machine.
    The machines would be linked via a separate gigabit switch (Fibre Channel is way too expensive, and probably way overkill for this as well).
    Now, that OS/software would obviously distribute the work to each machine, but the WCG client would think it runs 32 threads on one machine. Much like a HW RAID controller that stripes data onto multiple disks, which the apps don't know about, of course.

    Make any sense to you?
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  13. #13
    Mr. Boardburner
    Join Date
    Jun 2005
    Location
    the Netherlands
    Posts
    5,340
    If you've got 2 RJ45 connectors on the boards, you could use both of them; teamed/bonded together, the OS will divide multiple streams of data over the two connections. I guess clustering takes a good deal of bandwidth.
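
    For reference, on Linux that kind of teaming is the bonding driver's job, and it takes a bit of setup rather than happening by itself. A rough sketch of the classic way (the interface names, address, and balance-rr mode are assumptions, and the switch has to play along):

    Code:
    # /etc/modprobe.d/bonding.conf -- load bond0 in round-robin mode
    alias bond0 bonding
    options bonding mode=balance-rr miimon=100

    # then, as root, bring the bond up and enslave both NICs
    ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1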

    OT (since this is WCG ): heard anything from CSX yet?
    Main rig:
    CPU: I7 920C0 @ 3.6Ghz (180*20)
    Mobo: DFI UT X58 T3eH8
    RAM: 12GB OCZ DDR3-1600 Platinum
    GPU/LCD: GeForce GTX280 + GeForce 8600GTS (Quad LCDs)
    Intel X25-M G2 80GB, 12TB storage
    PSU/Case: Corsair AX850, Silverstone TJ07

  14. #14
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    No, it seems my contact is on vacation or something...
    The mini-DFI has only 1 LAN port, so I guess teaming won't be possible. From what I've read, it's not required anyway if all you do is distribute the work. However, if you render something, you'll need a lot of bandwidth.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  15. #15
    Mr. Boardburner
    Join Date
    Jun 2005
    Location
    the Netherlands
    Posts
    5,340
    Quote Originally Posted by jcool View Post
    No, it seems my contact is on vacation or something...
    The mini-DFI has only 1 LAN port, so I guess teaming won't be possible. From what I've read, it's not required anyway if all you do is distribute the work. However, if you render something, you'll need a lot of bandwidth.
    Maybe toss in some cheap PCIe NICs? Oh well, you could always do that if you need more bandwidth.
    Main rig:
    CPU: I7 920C0 @ 3.6Ghz (180*20)
    Mobo: DFI UT X58 T3eH8
    RAM: 12GB OCZ DDR3-1600 Platinum
    GPU/LCD: GeForce GTX280 + GeForce 8600GTS (Quad LCDs)
    Intel X25-M G2 80GB, 12TB storage
    PSU/Case: Corsair AX850, Silverstone TJ07

  16. #16
    WCG Cruncher
    Join Date
    Jun 2008
    Location
    Grid Square EM58mh
    Posts
    957
    Quote Originally Posted by EvoCarlos View Post
    I found this, it may be worth a read
    http://www.scl.ameslab.gov/Projects/...ara_intro.html
    Yes, good link. I've been interested in clusters or "Beowulfs" for a while but haven't been able to climb the Linux learning curve yet - due to lack of time - at least that's my excuse so far. I've been working with the Kubuntu distro, and have literally pallets of Dell OptiPlex 240s, 260s, and 280s in various states of functioning or completeness. Keep up the good work on your project. Hopefully I will be able to contribute substantially later.

    Distributed Computing: Making the world a better place, one work-unit at a time.

    http://www.worldcommunitygrid.org/index.jsp

  17. #17
    Xtreme Cruncher
    Join Date
    Jul 2006
    Posts
    1,374
    Grid computing is the ability to run multiple computations via the network on many different computers; each WU is its own instance, however. Cluster computing would be closer to running the F@H SMP client across all the nodes in your cluster; rather than having separate instances, you have one instance of the program, with all the cores working together on that single computation (more or less).

    You can set up a cluster, but you won't gain any advantage (most likely), simply because you are already running an instance on each core, as opposed to using multiple cores for each instance. Regardless, you can still set up a cluster and do that.
    Last edited by xVeinx; 06-01-2009 at 10:39 PM.

  18. #18
    Xtreme Cruncher
    Join Date
    Apr 2005
    Location
    TX, USA
    Posts
    898
    Good luck on your endeavor of building a cluster

    Personally, I've been meaning to get a diskless grid running for quite some time; I even had WCG crunching for a month or so on a diskless computer running Linux over NFS two years ago, but school/work pretty much stole all my attention, so I haven't been able to get any further. But hopefully things go better this summer/fall, since I'm on co-op at IBM for a full 6 months this time and don't necessarily have to be in a rush to figure out/finish my project.

    The way I envision it, I'd run all of my nodes off of a base OS filesystem (read-only via unionFS, including all binaries to be shared), mount any temporary data directories as tmpFS, and have any persistent data (i.e. BOINC data) go to its own mount over NFS. It wasn't too terribly hard to get everything above in place to the level required for crunching, so the place where I'm looking at putting the most work is writing some Perl scripts (maybe some shell script/AWK as well for good measure :P) to automate adding/removing/configuring nodes, updating the OS filesystem, allowing for specifically tuned kernels, etc. Originally I was thinking about the issue of having 32-bit machines co-exist with 64-bit ones, but nowadays I don't run any machines that are 32-bit only, except my Sammy, which'll probably get firewall/remote access duty. With that problem gone, I'd ideally like it such that I could plug a box into the subnet/VLAN and have it automate the configuration to get WCG running under my account without any interaction.
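
    In mount terms, the layout I have in mind looks roughly like this. A sketch only; the server IP and export paths are made up:

    Code:
    #!/bin/sh
    # early-boot mount order for a diskless node (paths/IP are placeholders)
    mkdir -p /ro /rw /newroot
    mount -t nfs -o ro,nolock 192.168.0.1:/export/baseos /ro   # shared read-only base OS
    mount -t tmpfs tmpfs /rw                                   # throwaway per-node write layer
    mount -t unionfs -o dirs=/rw=rw:/ro=ro unionfs /newroot    # union: writes land in tmpfs
    mount -t tmpfs tmpfs /newroot/tmp                          # temp data stays in RAM
    # persistent BOINC data gets its own per-node rw mount over NFS
    mount -t nfs -o rw,nolock 192.168.0.1:/export/boinc/$(hostname) /newroot/var/lib/boinc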

    There's also a Linux distribution (Ubuntu-based -_-) for diskless crunching that was created by Dotsch on the BOINC forums, which can be found here. I'll have to say that I haven't looked into it beyond quickly glancing through the website, since I feel that I started this project a good while before his release and I'd like to keep my ideas original, see where I can take it



  19. #19
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    I've seen cluster used incorrectly in the past. What xVeinx wrote pretty much sums it up. Cluster computing is also called parallel computing, which means computations are run in parallel across a cluster of machines. Whereas WCG uses each core to run one instance, a cluster uses all cores for each instance. Instance = WU.

    Since I do not know for a fact how WCG determines how many cores a system has (to decide how many instances to run), I am not sure how it would run on a cluster. I think it would determine how many cores are available, then start that many WUs and run them across the network on the nodes.
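
    From what I know, the BOINC client just uses whatever core count the OS reports, and you can override that by hand via the <ncpus> option in cc_config.xml. A sketch; the value 32 is only an example:

    Code:
    # drop this into the BOINC data directory as cc_config.xml
    cat > cc_config.xml <<'EOF'
    <cc_config>
      <options>
        <!-- pretend there are 32 cores, so 32 WUs get scheduled -->
        <ncpus>32</ncpus>
      </options>
    </cc_config>
    EOF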

    If you want to do this for the learning experience or the cool factor, both legitimate reasons IMO, then try it. Knoppix might be the easiest way to try it and see if it is what you want to do. I looked at the link rcofell provided and I think I will give it a try. Running off a USB stick sounds interesting and cheap.

    I ran Distributed.net on a FreeBSD cluster of 6 Pentium Pro 180MHz boxes with a dual Pentium 166 as the proxy/gateway a long time ago. It was fun and I learned a lot. Just be prepared: it will take some time to set up and get running correctly.

  20. #20
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    I think the term you're looking for here is a farm, not a cluster. Nevertheless, impressive amount of resources you'll have at your disposal, and I'm in awe at the amount of dough you've thrown into this project.
    Rig specs
    CPU: i7 5960X Mobo: Asus X99 Deluxe RAM: 4x4GB G.Skill DDR4-2400 CAS-15 VGA: 2x eVGA GTX680 Superclock PSU: Corsair AX1200

    Foundational Falsehoods of Creationism



  21. #21
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Thanks for all the input guys, I appreciate it

    Gonna check out that link rcofell posted, but I'm probably gonna have to ice this cluster (or whatever the word is ) business for a while, due to lack of time and the lack of a 2nd node to experiment with.
    Seeing as I spent money that I didn't really have on a dual Gainestown build, node #2 will probably be a while.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  22. #22
    Xtreme Cruncher
    Join Date
    Jun 2006
    Location
    Land o' 10,000 lakes
    Posts
    836
    Quote Originally Posted by PoppaGeek View Post
    I have done some cluster work. While I am not sure what you mean by cluster, I can tell you the true definition of cluster requires the application to have been coded to run on a cluster. WCG is not. WCG is a grid.

    I was a Sr. UNIX Systems Admin: Solaris, AIX, Linux, FreeBSD and SCO. I'll help where I can, and can do some research for you if you define what exactly you want to accomplish. There are cluster-like tools where you can issue commands on one machine that execute on others, but they are not a true cluster, just control and management tools for multiple machines.

    My plan is to try and set up a group of diskless machines running Linux that are all controlled from one machine, all running WCG. But that is still not a true cluster.

    Hope my explanation helps.

    I love the stacked cubes idea.
    Some of the research I have done on this points to needing SSI (Single System Image) for it to work. Do you know anything about those kinds of setups?

    I'm almost always available on Steam to chat. Same username.

  23. #23
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    Quote Originally Posted by jspace View Post
    Some of the research I have done on this points to needing SSI (Single System Image) for it to work. Do you know anything about those kinds of setups?
    Running a common root filesystem, with each instance having its own home filesystem?

  24. #24
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Update: swapped the 920 C0 for a W3520, batch 3848A352:



    Looks very good at first sight; however, this thing is the most evil low-VID CPU I have seen since my old B3 Xeon 3210.
    At that insanely low Vcore, it pulls 260W BOINC-loaded with no GPU, 380W with the GTX 260 running GPUGrid. Temps are insane too, like 85°C on all cores, 90°C while running Prime. Still on the Scythe Zipang; contact isn't bad, I checked it. However, it seems stable, so I'll let it run for now.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  25. #25
    Registered User
    Join Date
    Feb 2009
    Posts
    470
    Well jcool, that's truly a low VID!
    Just a pity you can't fit a TRUE in there, but nevertheless I like your idea and the style


    Tell it it's a :banana::banana::banana::banana::banana: and threaten it with replacement

    D_A on an UPS and life

