
Thread: RAID5 file server build advice

  1. #1
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542

    RAID5 file server build advice

    Hi everyone, and Happy New Year!

    Don't give up on this post right away, please!
    I am making this post in need of some experienced advice.

    I would like to build a low-power SATA RAID5 file server, using 3 or 4 1TB WD drives for a start.
    Its intended purpose will be to store and share lots of 5+GB video files, music, pics and other, smaller file types.
    Users around my house will stream those files to their PCs for viewing/listening, use server storage to edit and save documents, and print them on a network printer.

    More info, my needs and wants:
    The server will have 5-6 users total, but rarely more than 3 connected at the same time, mostly for light loads.
    Still undecided on the OS (are there any network performance caps with Windows or Linux? BTW, I know nothing about Linux).

    I want:
    - fast file copy to the server (I've suffered enough with WHS's 16 MB/sec upload transfers)
    - faster transfers from the server to wired clients (I'm getting 30 MB/sec now on my WHS and I want more)
    - media streaming to a PS3 and similar clients
    - the ability to upgrade my array as easily as possible (up to 8 disks total in the server)
    - the ability to transfer my array to a new mobo safely
    - to use the server as a web host for family use
    - users to be able to run light applications on the server independently, via remote desktop or similar
    - user-friendly user management on the server (file shares and permissions)

    I do intend to use a dedicated SATA RAID card, but this is also my main concern: PCIe or PCI-X for up to 8 drives?
    I know the more expensive cards will offload my CPU and do the calculations faster, but there are also cards whose onboard processor doesn't do all the calcs and dumps them on the CPU instead.
    What controller types are out there, and what are the important differences between them?
    For my application, would an expensive card be justified in any way compared to the cheaper ones?
    Is there a sweet spot I might take advantage of?
    And are there even any cheaper 8-port PCIe RAID cards out there?

    No Areca in Croatia btw, so...

    Next is the mobo debate.
    Will a normal C2D or AMD board be good for my build, or should I rather look at single-CPU server boards (Supermicro, Tyan) and ECC RAM?

    The last is the system drive.
    I planned on using my array to hold my C:\ system partition.
    However, I've noticed that more serious setups use a separate drive, or even a RAID0 or RAID1 set, as the C: drive.
    Would I see a real-life benefit from this for my planned usage?
    If there is a benefit, could I use an old or 2.5" drive on onboard SATA without hurting server performance?

  2. #2
    Xtreme Enthusiast
    Join Date
    Apr 2007
    Posts
    700
    Dobar Dan,

    You have one of those CD shops at the side of the road in a sea container, don't you...


    Basics..

    RAID 5 will not make it faster. RAID 0 will.

    If you make a dual RAID 0 mirrored in RAID 1, you have a backup and it will run fast.

    The PCI RAID card was my suggestion; it will be the most expensive part, but it will change performance greatly.

    With only 6 people you should be able to run RAID 5, but I prefer the speed of RAID 0. It will be more expensive, because you should have a backup of your data... but then you should have one even if it's on a RAID 5 array.

    You don't need to get all 8 ports on one card. You can get two 4-port cards; there should not be a reduction in performance as long as the array does not span the two cards.

    I'll reply here again after I look into what I would build for that...
    ʇɐɥʇ ǝʞıl pɐǝɥ ɹnoʎ ƃuıuɹnʇ ǝq ʇ,uop

  3. #3
    Xtreme Cruncher
    Join Date
    Feb 2007
    Posts
    594
    Quote Originally Posted by Brian MP5T View Post
    RAID 5 will not make it faster. RAID 0 will.
    What? Access time may increase, but the more disks you add, the faster you can read/write compared to a single disk, and since XS Janus wants to read/write huge files, access time doesn't really matter that much, does it?



    Last edited by Xcel; 01-01-2008 at 05:31 AM.

  4. #4
    Xtreme Enthusiast
    Join Date
    Apr 2007
    Posts
    700
    Quote Originally Posted by Xcel View Post
    What? Access time may increase, but the more disks you add, the faster you can read/write compared to a single disk, and since XS Janus wants to read/write huge files, access time doesn't really matter that much, does it?
    I suppose not,

    I have a mental block and am biased towards "free speed".

    The access times and performance of RAID 0 are staggering.
    ʇɐɥʇ ǝʞıl pɐǝɥ ɹnoʎ ƃuıuɹnʇ ǝq ʇ,uop

  5. #5
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    @ Brian: Dobar dan
    What did you mean by Sea Container? What is that...

    Back on topic:
    I need protection against a single drive failure, so I really think RAID5 is the way to go for me. Movies and music aren't mission-critical data, and the rest that is can always be backed up.

    I will be using a midi tower case with 8-9 drive bays, when I find one. That also makes RAID 0+1 too big for that kind of case and too expensive for the number of drives.

    Which card should I go for, PCI-X or PCIe?
    If I go PCI-X the cards are cheaper, but then I have to go Supermicro on the mobo.

  6. #6
    Xtreme Enthusiast
    Join Date
    Apr 2007
    Posts
    700
    Quote Originally Posted by XS Janus View Post
    @ Brian: Dobar dan
    What did you mean by Sea Container? What is that...
    This...


    edited per request of the thread starter.
    Sorry if that upsets anyone.
    Last edited by Movieman; 01-02-2008 at 04:41 PM.
    ʇɐɥʇ ǝʞıl pɐǝɥ ɹnoʎ ƃuıuɹnʇ ǝq ʇ,uop

  7. #7
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542

    I guess that was a joke...

    Please resize your pic, it's a bit too big even for OT.

  8. #8
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    No takers, ha?

    Has anyone used a Promise SuperTrak EX8350? It seems like a good choice for a home file server.

  9. #9
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Just use the Intel ICH9. It's OK, nothing special, but it's better than a cheap card. Have you looked at getting the Samsung 1TB drives? They read faster and have fewer platters, so they spin up faster.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  10. #10
    Xtreme Member
    Join Date
    Dec 2007
    Location
    St. Louis
    Posts
    155
    RAID 5 does indeed give you increased speed... just not as much as RAID 0, but for serving files to 3 people, who gives a f*ck? Now, a RAID card that can handle RAID 5 and 4+ drives is going to cost, so I would personally just use a chipset with Intel RAID. The performance isn't the greatest, but like I said, you're only serving files to 3 people, and 4 drives in RAID 5 should be fast enough to saturate a 1Gb network connection anyway. 4 drives in RAID 0 would be bottlenecked by the network, so it's pointless. The Samsung F1 drives do look nice, but the Western Digital drives draw much less power since they're 5400RPM... that's up to you though.
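    Rough numbers behind that claim, as a quick Python sketch (the ~80 MB/s per-drive sequential figure and the ~10% network overhead are assumptions, not measured values):
    Code:
        # Rough check of "4 drives in RAID 5 saturate gigabit".
        gigabit_wire = 1000 / 8              # ~125 MB/s raw line rate
        gigabit_usable = gigabit_wire * 0.9  # ~112 MB/s after protocol overhead (assumed)

        drives = 4
        per_drive_seq = 80                   # MB/s sequential per drive (assumed)
        raid5_read = (drives - 1) * per_drive_seq  # conservative: n-1 data chunks per stripe

        print(f"network ceiling ~{gigabit_usable:.0f} MB/s, RAID5 sequential read ~{raid5_read} MB/s")
        # the array (~240 MB/s) is well past what the wire can carry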
    • Thermaltake Mozart-TX
    • Corsair 620HX
    • Gigabyte 965P-DQ6
    • Intel C2D E6400
    • 2 x 1GB G.Skill PC8000
    • eVGA 7800GT CO 256mb
    • 2 x 320GB Hitachi - RAID0
    • DD TDX - CPU
    • DD Maze4 - GPU
    • DD Laing D5
    • Black Ice 240mm RAD

  11. #11
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Happy New Year back at you!

    Unfortunately I don't have the time I would like to dedicate to this, so I'll try to be as helpful as I can in as little time as I have.

    1. Yes, a single-processor Intel setup should be fine. Or AMD. Really, either will work. File serving is a fairly brainless operation; you could even get away with Celerons if you wanted to save some cash.
    2. PCI-X or PCIe would work equally well. Given that it's more common, PCIe will probably be the way to go if you want an add-on RAID card.
    3. Yes, an add-on RAID card won't hurt performance... but if it's *only* being used to serve a relatively low volume of video, you might as well try chipset RAID first and see whether you need a performance boost. Might save yourself a few hundred dollars.
    4. Why do you want to use 5400rpm drives for a file server? I understand the power draw is less, but so is the performance. That said, if you're streaming 3 videos at once you'll want to know the average bandwidth of each stream. If you're looking at, say, 100MBps each, you'll have a hard time hitting that with 5400rpm drives.
    5. To prevent disk thrashing, it may actually be better to split into individual drives. Definitely something to consider if you find you need better performance.

    Sorry, I had more to say and more to elaborate on... but time ran out. Reply back with any questions, should have more time tomorrow to help.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  12. #12
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Just saw this thread. This is kind of similar to some of the things I'm doing here (same file sizes, client machines, etc.); I mainly do DV & photo work, where multi-GiB files are being shared around.

    A couple of items first:
    - Seriously think about drives other than WD's (or at least not their desktop lines). I have seen, and it has been reported numerous times, that WD just can't get RAID to work reliably in multi-drive setups. Yes, you may be lucky. Yes, with various firmware flashes you may get past some hurdles, but that is not something you want to be worrying about when you're in the multi-TiB range.

    - Figure out how large you expect to grow your array (capacity planning 101: how much do you need _NOW_? How much in 18 months? In 36 months? Then add about 50%). That gives you a (simplistic) baseline for sizing decisions (internal/external chassis, RAID card, or whatever).

    - How long do you want to keep the current array/system without a forklift upgrade (i.e. what is the expected service life: 3 years? 5 years?)? Combined with the capacity figures above, this will help you gauge the case and array layout.

    - What is your availability requirement? (i.e. do you plan on backing this up? How long would it take to restore? How long can you live with it being down?)


    Now on to the rest:

    - When copying data to/from your server, assuming you'll be using the SMB/CIFS protocol, DO NOT expect more than ~50MiB/sec from each thread/client. I have not seen more (I have searched, posted and asked MS and others, and have NEVER gotten more than that, even when using 10GbE NICs). This is per thread, so your SERVER, if it has a GbE or faster NIC (or if you bond NICs), can handle more than that in aggregate (e.g. 2 clients at ~50MiB/s each, 100MiB/s total, can fit into a single 1GbE NIC, assuming you have a clean network).

    - Bonding NICs will NOT increase per-thread throughput. It's link aggregation: each new stream is placed on a physical link based on its source:destination IP pair, so it helps your server handle multiple clients, but not a single client, which will only ever be on one wire. If you wanted more speed (and were using something other than SMB), you'd need a faster NIC, like a 10GbE one, which is not cheap (well, the CX4 NICs aren't bad, but the switches are killers).

    - Upgrading as easily as possible: this depends on the size/capacity you are looking at. You can buy, as I did, a multi-drive server chassis that you can plug more drives into. This works until you reach the limit of that chassis, and then you have a forklift proposition in front of you. The most flexible means of storage expansion is an external JBOD SAS chassis with expanders and a SAS RAID HBA. That lets you fill up a chassis with drives and add new chassis over time; your RAID card can then expand the raidset/volumeset behind the scenes with only a reboot to take effect (after the data migration/resync, which can run for some time depending on the size of the array), but with no real downtime. This is not cheap, but it is exactly where I am today, and with current utilization projections I will be doing it in about 18 months.

    - Transferring your array from motherboard to motherboard or chassis to chassis: this screams hardware RAID card. Arecas are good, though not cheap. This is exactly why I have them in my systems here (well, one of the reasons).

    - Running multiple applications on the server: personally I try to avoid this (too many eggs in one basket), but it's doable. You generally have a higher risk of issues, as you have more things going on at once.


    As for the other items and some lower-level stuff: Linux & Samba as a server OS is not end-user friendly if you're not used to it (the learning curve is steep), but it is very functional and has a lot of added benefits on larger systems (e.g. the JFS filesystem, which can grow dynamically, takes very little memory to check, and can expand up to 2PiB in a single volume while keeping 4KiB block sizes so you don't waste anything on smaller files; LVM for snapshots, etc.). On Windows you'll need GPT partitions if you want >2TiB of space, and most likely a server OS (2003), which will then cost you more $$ for your applications (most application vendors I've found charge extra if you run their software on a server OS).

    As for RAID card type, where possible get PCIe over PCI-X. This is mainly for future-proofing more than anything; you won't really run into a speed issue unless you buy lower-end cards or much older HDs. A PCI-X (133MHz/64-bit) slot can handle ~1GiB/sec. PCIe is ~250MiB/s per lane, so 4-lane PCIe is ~1GiB/sec, and PCIe is bi-directional whereas PCI-X is not.
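    The same figures, spelled out in a short Python sketch (numbers taken from the line above; nothing measured):
    Code:
        # Bus bandwidth comparison, using the figures above.
        pci_x_133 = 133e6 * 64 / 8 / 2**20   # 64-bit @ 133 MHz -> ~1015 MiB/s, shared, one direction at a time
        pcie_lane = 250                       # MiB/s per lane, per direction (PCIe 1.x)
        pcie_x4   = 4 * pcie_lane             # ~1000 MiB/s in each direction at once

        print(f"PCI-X 133MHz/64-bit: ~{pci_x_133:.0f} MiB/s")
        print(f"PCIe x4:             ~{pcie_x4} MiB/s per direction")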

    Basically any card that has its own processor (say the IOP3xx series, or the 3ware ones) will be in the >$400 range, up to about $1500 or so. You won't really find a true hardware RAID card for less unless it's ancient. If you can't get Areca, try Adaptec, LSI or 3ware. After them, look at HighPoint or Promise (lower-end in the RAID realm, but still OK).

    As for RAID level: I've attached an Excel spreadsheet that you can have fun with and run the numbers for reliability calcs; that is basically what the choice of level comes down to. If you have a hardware RAID card, the card won't be the problem in saturating a Gbit link with any RAID level. I would NOT suggest RAID 0/1/1E/10 for this, as they are either wasteful in space or not as reliable as RAID 3/5/6. I would also avoid multi-level RAID (30/50/60) unless you are planning on a fixed array size (you can't expand those, you have to re-stripe/reformat), and a large one at that.
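    For anyone who doesn't want to open the spreadsheet, here is a minimal Python stand-in for the capacity/redundancy side of it, assuming 8 equal drives; it does not do the full reliability math:
    Code:
        # Usable capacity and guaranteed failure tolerance per RAID level,
        # for n equal drives; only the simple part of the spreadsheet.
        n, drive_tb = 8, 1.0
        data_drives = {"raid0": n, "raid10": n // 2, "raid5": n - 1, "raid6": n - 2}
        tolerated   = {"raid0": 0, "raid10": 1, "raid5": 1, "raid6": 2}  # guaranteed failures survived

        for lvl in ("raid0", "raid10", "raid5", "raid6"):
            print(f"{lvl}: {data_drives[lvl] * drive_tb:.0f} TB usable, "
                  f"tolerates {tolerated[lvl]} drive failure(s) for sure")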

    Also, search is your friend here (as is the RAID sticky post), which covers a lot as well. I think I answered most of the points.

    (location of the RAID spreadsheet, as opposed to the old version)
    http://www.xtremesystems.org/forums/...9&postcount=52
    Last edited by stevecs; 01-26-2008 at 04:02 PM. Reason: updated location of spreadsheet

    Server/Storage System: Supermicro X8DTH-6F | (2) Xeon X5690 | 192GB Samsung PC10600 ECC | 800W redundant PSU | NEC slimline DVD-RW DL | (6) LSI 9200-8e HBAs | (8) ST9300653SS 300GB (RAID0) | (112) ST2000DL003 2TB (RAIDZ2) | (2) ST9146803SS 146GB (RAID1) | Ubuntu 12.04 64bit Server
    Gaming/Work System: Asus Z9PE-D8 WS | 2x E5-2643 v2 | 2x EVGA nVidia GTX670 4GB | (8x8GB) Kingston DDR3-1600 ECC | Corsair AX1200 | Lite-On iHBS112 | PA120.3, Apogee, MCW N&S bridge | (1) Areca ARC1880ix-8 512MiB cache | (8) Intel SSD 520 240GB (RAID6) | Windows 7 x64 Pro
    Sundry: HP LP3065 30" LCD monitor | Minolta magicolor 7450 | Nikon Coolscan 9000 | Quantum LTO-4HH | Dell D820 laptop (2.33GHz, 8GB RAM, DVD-RW, 128GB SSD, Ubuntu 12.04 64bit)

  13. #13
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542


    Thank you so much guys for all your inputs so far.


    @stevecs: I hope you won't have any medical problems with your fingers from making such a long post on my account.
    PS: I tried to download your attachment but it says the link is invalid. ??

    1. I definitely want a RAID card because I don't want to think about what happens when my mobo fails or, worse, becomes unstable!

    The Promise SuperTrak EX8350 and the Areca 1220 have a great layout for my needs (or so I believe).
    Both are PCIe (low-profile if needed), use regular SATA cables, and have good access to their connection points.
    Areca I can't buy locally, and that makes an expensive card even more expensive.
    Other vendors I looked at have their better cards as PCI-X only (Adaptec, LSI), and their SATA ones usually use add-on proprietary cables (3ware).
    Is there any critical lack of functionality or other reason why the Promise card costs $150-200 less than the other choices?
    So far it seems like the smarter choice.

    2. I want RAID5 because it will offer enough security for storing movies and other media.
    None of this data is really mission-critical or irreplaceable; my business files and other family stuff I can back up separately on the clients or on DVDs. Downtime is not a really serious issue in my case, more of an inconvenience.
    It will, however, protect my data from the most frequent occurrence, which is a single drive failure.
    I wanted to go with only RAID5 in my system and divide it into C: and D: for boot and data.
    Is there a major downside to this?
    What happens to my array when I add a drive? Can I choose which drive to enlarge, C: or D:?


    3. I will grow my array to 8 drives max on one card, and only when prices drop a little. I wasn't really thinking of going with complicated RAID levels, because they would be wasteful for my needs and, as mentioned, would limit array expansion.

    4. It is interesting that you mention read/write will be capped at 50MiB/sec per client on a Windows platform. I had read that in one of your posts on this forum before.
    Is that the way they designed the platform, or is it just a bug that maybe has a chance of being corrected in Server 2008?
    I would consider going Linux, but all that seems complicated to me as I've never had any contact with it.
    Support is more accessible and familiar for Windows users, is it not?
    Does the Linux platform offer any more speed, and what bottlenecks those setups first?
    I could live with the cap, because a 1Gb network won't give me much more, and right now I'm getting 35 read and 16 write on my MS WHS server.

    5. I wanted the 1TB WD1000FYPS, the RE2 version of the GreenPower lineup. I already got one drive, but I'm thinking of selling it because I stupidly bought the EACS version before I even had this idea. And lower performance won't be a problem given the undemanding tasks at hand and the Windows throughput cap, no?
    WD and Samsung are the only options for me because they are both low power; Seagate and Hitachi are real power hogs. Any info on Samsung's RAID abilities? They aren't exactly known for disks.
    Any links or more info that would sway me away from WD drives in RAID, or help me learn more about them?
    I found the dropping-drives problem on the YS series, but that was fixed with an update, no? Any other issues? WD is a respectable company after all.

    Please stay with this thread for a while longer; I'm sure I'll think of something more to ask. Thanks for all of your time!
    Last edited by XS Janus; 01-03-2008 at 02:29 PM.

  14. #14
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    About the attachment, that's strange, I just tried pulling it up and it works. Shoot me a PM w/ your e-mail address and I can send it to you directly if you're still having problems.

    #1: Yes, the Promise card looks comparable to the Areca 1220 and has the same older IOP333 processor on it, which should still be fine for most things. It really comes down to the RAID management functions on the cards. As for 3ware, they don't really use proprietary cables, just ones you don't normally see in the desktop world; for most of their cards they are standard SAS multi-lane cables, which is good, as they clean up the cabling mess. As for cost, the differences between the Promise/HighPoint and the Areca/LSI/3ware camps are generally in RAID management and performance design. Promise/HighPoint are generally good, but they have not (or at least I have not seen the results of it) put money into developing the RAID management functions and higher-end performance. They generally release products after the bigger players have moved on to higher-end components.

    #2 - Yes, a single drive failure is (in hardware terms) the most likely first event, but there is a much higher chance that a second or correlated failure will occur once one drive in an array dies. Generally it's about 10x more likely for a second drive to die right after the first one goes (environmental issues, age, batch issues, etc.). As for putting C: and D: on the same array, it's doable but not recommended, as the I/O patterns on your drives will not be the same, so your performance will take a hit. Windows OS partitions have a very different I/O pattern than a media/data drive does (this is also why a lot of companies have moved away from the idea of booting off SANs). I would even say you probably don't want RAID-5 for your OS; I would recommend RAID-1 for it instead. That removes the parity calcs for that array (and you want it to be a separate volume set, not the same one as your media, i.e. two different 'drives' from the OS perspective at least). As for expanding, I don't know about the Promise (that goes back to the management functions). Generally you grow the raidset first by adding new drives to it; once that's synced up you can grow the volume set on that raidset. The ability to actually use the new space is highly dependent on your OS/filesystem. JFS & XFS under Unix handle it no problem, as they are set up with block allocation groups for this purpose. I have not tried it under NTFS, so I'll have to defer to someone else here; it may be that you'll sacrifice inodes or need to change the blocking to do it.

    #3 - 8 drives is decent, and with only 8 drives any RAID level except RAID 0 would be fine with any current drive technology.

    #4 - I don't think it was designed that way, or at least not intentionally. Remember that SMB was designed what, 15 years ago? Gbit speeds were a pipe dream back then; most LANs were 10base2/token ring, or 10baseT if you were on the cutting edge. I've tried Vista (which uses SMB v2, same as Server 2008) and did not notice any improvement in per-thread speed. It could be due to other TCP/IP problems or whatever else, but at least I have not seen better transfer speeds. As for Linux/Unix offering more speed: not in a way that would be much use to you. You can get better speeds out of some other file-sharing protocols, but the problem is that they are only really compatible with other Linux/Unix-type clients, and then you may run into other issues (e.g. NFS v4 is the only one that supports Unicode and more optimized transfers with batching at the network layer, but it's a pain to set up; v3 is much easier to set up but is about the same as SMB/CIFS, and finding clients can be a problem; Coda is interesting but I probably wouldn't rely on it). You could do iSCSI, but that would be a separate network buildout, and then you still have the same problem of getting support on the various clients. I would say expect to be limited to approximately 50MiB/sec per client machine, which is not great, but still not bad, as it's still 2x HDTV MPEG-2 stream requirements.

    #5 - The WD1000FYPS RE2 is definitely better than the desktop version, and you may have better luck. Samsung is making some noise with their 333GB-per-platter drives (3 platters), which may be interesting, but I haven't seen anyone use them yet in any of the data centers here that I have access to. The biggest problem with the WD drives is their tendency to drop offline in RAID setups. I personally had to replace about 200 of their drives for this reason, and I know that CERN had to do over 3000 drive updates for their WD drives as well (firmware and other issues causing dropouts). Even after the updates you generally have to run the drives at SATA 150 with NCQ off to get them to stay online. Now, I'm NOT saying this is indicative of a major problem; 200 or 3000 drives is NOTHING compared to how many they ship. It's just a word of warning that, at least in my 'small' world, I have seen a disproportionate number of issues with their drives compared to Seagate/Fujitsu/Hitachi/IBM.
    Last edited by stevecs; 01-03-2008 at 03:21 PM.


  15. #15
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    I'm just going to let the other posters work on this as I'm still at work, but I do want to comment on this:

    Quote Originally Posted by XS Janus View Post
    1. I definitely want a RAID card because I don't want to think about what happens when my mobo fails or, worse, becomes unstable!
    I don't know what you think a RAID card will do to help if your mobo starts to go. If your CPU is providing bad data to the RAID card, the RAID card will write bad data to the hard drives. There's no real saving of information here. The only way to protect yourself against corruption is to perform regular backups.

  16. #16
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    I don't want to put words in his mouth, but I took this as being about replacing the motherboard if it fails and being able to take the card and array and plug them into another motherboard (i.e. not using the on-board RAID function, which would not be portable to another system).

    You are correct, though, that in a failure that isn't clean your data may end up corrupted on the array itself, or you may have issues with your OS (if it's on the RAID array) not liking the switch if you make a major change in the underlying hardware (different chipsets, etc.).

    As has been stated, RAID is not a backup and you should always have a good backup strategy in mind.


  17. #17
    Registered User
    Join Date
    Apr 2007
    Location
    DFW, TX
    Posts
    87
    My thoughts:

    • If you have a hardware RAID card, the CPU won't be much of a concern, assuming other things (i.e. network traffic) don't max it out.
    • There isn't much reason to spend a ton of money on a workstation board from Supermicro or Tyan unless you need PCI-X slots.
    • Go with a PCIe card for the reason above.
    • My top 3 favorite RAID card manufacturers I've dealt with would be: 3ware (easily #1 by a large margin), Adaptec (watch out for their fakeraid crap cards though), and LSI (some of their BIOSes can be tricky if you want to make a RAID 10 or 50). I've also dealt with Areca (1210 only) and Promise (only one or two cards, and it's been a long time).


    With that in mind, I'd recommend the 3ware 9650SE series if you want a true hardware card. I just got my 9650SE-4LPML in the mail today.


  18. #18
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542


    Yes, I was thinking of the situation where a mobo dies. I understand bad things could happen to the array if it gets fed bad data, as Serra mentioned.

    @stevecs: Did I understand you correctly, or was it a typo, when you said that even with a RAID card I can expect problems when upgrading my system to a newer-generation one and trying to move the array over?

    @Volkum and others: I looked into 3ware as it is available where I live, and it is actually my second choice.
    What I don't like about it is that I don't know how those multi-lane cables will work for me, and the positioning of the connectors on the back of the cards also has me worried about fitting all the stuff in a midi tower case.
    It is probably faster in higher-stress situations, as stevecs mentioned, due to better design, but I wonder if it's worth spending >$150 more for performance that I won't see in my situation, being capped at roughly 50MiB/s in a Windows environment.
    What could convince me to get it would be a list of the array management features that differentiate the 3ware and the Promise EX8350; as stevecs mentioned, that, along with the internal design, is what should be different.
    So does anyone have any info on the different management features between Promise and 3ware?
    I've tried, but I'm still not well versed in the RAID lingo and find it difficult to tell apart what is important and what's not.
    PS: Volkum, when you get the chance, please post more pics of what you got and how you plan to use it.

    The idea of having my system partition on the RAID5 is something I thought of doing because it is difficult to find a smaller case with 8-9 exposed 5.25" bays.
    I will play with this option when I build the system to see if, and how much, it affects my performance.

    I don't think I will go RAID1 on the system partition even if I choose to use a separate drive for it, as I can always reinstall it or load it from an image copy or something. I don't plan to use a lot of add-ons.

    But I'm really interested to learn what the benefits of a separate C: drive are, other than possible speed improvements, and why there would be any. How would a good 2.5" drive do as a system partition?
    - Will it be faster than C: on the array, or might it slow the whole process and data responsiveness down due to something else?
    Remember, I can't expect very fast reads/writes for a single client on this setup anyway, but other possibly useful aspects of a separate drive would be good to have.

    My regards!

  19. #19
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    - As for upgrading RAID cards: generally, as long as you stay with the same manufacturer you should be fine; at least I haven't run into problems moving an array from one card to another card from the same company. That is NOT a guarantee though, as they could change the chipset or other functions and break compatibility.

    As for 3ware and multi-lane: the multi-lane cables are generally better for cable runs than discrete ones, which is why they are preferred. I'm running the multi-lane version of the Areca cards here as well and would recommend them in pretty much any situation, especially if you have, or plan on, more than a couple of drives.

    As for performance: the ~50MiB/sec limit is on the network. You will easily want more speed than that from your drive array. Just to pull a number out of the air, I would say you want at least 4x that on the system so you can handle other tasks hitting the array (virus scans, backups, general processing) without affecting other users who are using it. And in your case that's ~50MiB/s PER CLIENT, so if you have two clients that's ~100MiB/s hitting your server; if you have four (and link-aggregate multiple GbE connections) you can have ~200MiB/s.

    A separate OS drive/array is beneficial because of the access patterns and I/O requirements of that drive (any workload with a differing I/O pattern benefits from having its own array/drive). Any drive can handle only a limited number of operations per second; it's easily calculated (it's a field in the spreadsheet as well), and that is your limited resource on a system. Combine that with the request size and you pretty much have your I/O throughput (for an OS the access is generally random, with small request sizes). Say a single drive gives you at most 75 IOPS (average for 7200RPM 3.5" drives); barring RAID write penalties, this scales up with the number of spindles. It's probably easiest to see with the following example:

    4 drives / 7200rpm / RAID0 at, say, 75 IOPS per drive = 300 IOPS total.
    With a 16KiB request size (typical of a Windows OS): 300 x 16KiB = 4800KiB/sec, or about 4.69MiB/sec random.

    Same drives for a movie share with a larger request size of, say, 64KiB (for network SMB transfers): 19200KiB/sec, or 18.75MiB/sec.

    For large local files (say 1024KiB request sizes): 300MiB/sec (basically what you'd see in HDTach or similar).

    The point is that you don't get all of the above at once; you get 300 IOPS, and how you spend them has a dramatic effect on your performance.
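    The same arithmetic in a few lines of Python, if you want to play with the request sizes yourself (75 IOPS per drive is the same estimate used above):
    Code:
        # Fixed IOPS budget turned into throughput at different request sizes,
        # reproducing the 4-drive RAID0 example above.
        drives, iops_per_drive = 4, 75
        total_iops = drives * iops_per_drive          # 300 IOPS

        for label, req_kib in (("OS-style random, 16KiB", 16),
                               ("SMB-sized, 64KiB", 64),
                               ("large local, 1024KiB", 1024)):
            print(f"{label}: ~{total_iops * req_kib / 1024:.2f} MiB/s")
        # -> ~4.69, 18.75 and 300.00 MiB/s from the same 300 IOPS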


  20. #20
    Registered User
    Join Date
    Apr 2007
    Location
    DFW, TX
    Posts
    87
    Janus,

    I just got my new Windows install up and running on my 4x Raptor stripe with the new card. I haven't used the Promise management software before, but if you need any info on the 3ware management I can definitely answer any questions you may have (I also have a pretty good knowledge of their CLI, since I use it in production scripting at work).

    As far as how I plan to use it... nothing super special there. I just wanted 300GB of 10k RPM space on my gaming machine, and seeing as I got 3 of the drives (I already had one from earlier) and the card for killer prices, it made sense to go with this setup.

    I can post pics tomorrow, but it's bed time now.

  21. #21
    I am Xtreme
    Join Date
    Oct 2004
    Location
    U.S.A.
    Posts
    4,743
    Rebuild time on RAID 5 sucks. I'd suggest using RAID 6 or RAID 10.
    http://www.pcguide.com/ref/hdd/perf/...els/single.htm


    Asus Z9PE-D8 WS with 64GB of registered ECC ram.|Dell 30" LCD 3008wfp:7970 video card

    LSI series raid controller
    SSDs: Crucial C300 256GB
    Standard drives: Seagate ST32000641AS & WD 1TB black
    OSes: Linux and Windows x64

  22. #22
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    I don't know where that site got its information on RAID-6, but they need some help: data is not spread over more disks with RAID-6 than with RAID-5. RAID-6 will have lower performance for both reads and writes than RAID-5, and rebuilds will be slower as well. RAID-6 is a dual-parity RAID: it tolerates two failed disks instead of one. To do that it creates two parity blocks, so the number of drives holding non-parity data is n-2; with RAID-5 that's n-1, as it has single parity. Since RAID-5 has more data drives per stripe for the same number of disks (n-1 as opposed to n-2 for RAID-6), reading the remaining data after a failure is faster. Likewise, since it has only one parity calculation to perform instead of two, writes are faster too. Over RAID-5, RAID-6 is a reliability/availability benefit, not a performance one.

    As for RAID-10: yes, it has the benefit of being faster. Leaving aside the 50% space efficiency of RAID-10, your read speeds will be higher because there are two copies of the data, so the controller can read from whichever drive responds to the request fastest; that beats RAID-3/5/6, which has only one copy of the raw data, or which in a degraded state has to read n-1 or n-2 drives and reconstruct. For writes, RAID-10 has to write to both disks of a mirror, so write speed per mirrored pair is limited to the speed of a single disk. RAID-3/5/6 takes a hit when writing less than one full stripe: it has to write the data, then re-read the rest of the stripe, recalculate the parity and write it out. When you are writing a full stripe, however, it performs the same as RAID-10 minus the parity calculation overhead.
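    A minimal Python sketch of the space/parity trade-off described above, assuming 8 identical drives and full-stripe writes only (it ignores controller and cache effects):
    Code:
        # Space efficiency and parity work per full-stripe write for n drives.
        n = 8
        layouts = {
            "raid5":  {"data_chunks": n - 1,  "parity_calcs": 1},
            "raid6":  {"data_chunks": n - 2,  "parity_calcs": 2},
            "raid10": {"data_chunks": n // 2, "parity_calcs": 0},  # mirrored pairs, no parity
        }
        for name, l in layouts.items():
            print(f"{name}: {l['data_chunks'] / n:.0%} usable space, "
                  f"{l['parity_calcs']} parity calc(s) per full-stripe write")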


  23. #23
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    For what scenarios is a built-in battery a good thing to have?
    How sensitive is RAID5?
    Is it more stable with a separate single OS drive?

  24. #24
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    By built-in battery I'm assuming you're talking about the battery backup unit for the RAID card? Its primary function is to let you turn on write caching for your array (which has large benefits, even more so with RAID3/5/6) while still being able to survive a power failure or unplanned system interruption. Without it you should only operate in write-through mode to your array to maintain integrity. It has no effect on RAID reads (which are always cached).

    As for 'sensitivity/stability', I don't really know what you mean. Sensitive to what? Stable how? Remember that the drives are the same; the only difference is how the data is broken down and written. Or are you looking for reliability stats on the HBA?


  25. #25
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    So with the battery I can turn write caching on.
    Is that not the same thing as the write-back cache, which I read you can turn on and off on the controller like NCQ, or is it?

    You say there are large benefits for RAID5; what are they? More write speed, no bad data if the system crashes? How important is it, and for which usage patterns?

    By sensitive I meant: sensitive to data corruption from various crashes, unplugs and other accidents.

    And in that respect, would the whole configuration be more stable (concerning data errors) with a separate drive?

    What should I be afraid of (other than a two-drive failure) that could damage my data partially or, worse, crash my array?

