Page 2 of 8
Results 26 to 50 of 197

Thread: RAID5 file server build advice

  1. #26
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    It's the same thing. Better RAID cards generally operate (or should operate) in write-through mode, i.e. when a write request goes to the card, the card passes it to the drives and waits for the drives to confirm the write completed OK, then passes that OK back to the OS. That ensures the data is written properly.

    With a write-back cache (what gets enabled with a battery backup) the card issues an 'OK' back to the OS on behalf of the drives WITHOUT writing the data to disk right then. It usually waits until it can write an entire stripe at a time to avoid the write penalties. But without the battery you have a point of failure: you can lose up to a cache's worth of transactions if they haven't been committed.

    Write-back cache will benefit any RAID 3/4/5/6 operation (well, RAID 3 not so much, as that's a full-stripe RAID). For the others, by caching the data you have a much better chance of writing an entire stripe width at once, and when you do you avoid the 4-operation (RAID 4/5) or 6-operation (RAID 6) penalty and can do it all in one operation. It will come into play with nearly _any_ write operation on a RAID, except the case where you have 0% full-stripe writes (i.e. every write updates less than a stripe width). (Haven't seen anything like that, but it is possible.)
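    A rough sketch of the penalty arithmetic above (my own illustration, not from the thread; the 8-drive geometry is hypothetical):

```python
def raid5_write_ios(blocks_written, data_disks):
    """Disk I/Os needed to write `blocks_written` blocks of one RAID-5 stripe."""
    if blocks_written >= data_disks:
        # Full-stripe write: new parity is computed from the cached data,
        # so just write every data block plus the parity block.
        return data_disks + 1
    # Partial-stripe (read-modify-write): read old data + old parity,
    # write new data + new parity = 4 I/Os per updated block.
    return 4 * blocks_written

# 7 data disks + 1 parity (an 8-drive RAID 5):
print(raid5_write_ios(1, 7))  # 4 I/Os to update a single block
print(raid5_write_ios(7, 7))  # 8 I/Os for the whole stripe the cache batched up
```

    This is the payoff of write-back caching: batching seven blocks costs 8 I/Os instead of 28.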

    Sensitivity is going to be the same if you're talking about external factors of power/crashes/et al. That has nothing really to do with RAID; it's system & filesystem related, not RAID related. Remember RAID's purpose is to provide availability across a hard drive failure. It does nothing at all for system stability (its only concern is to replicate X block of data, not what's in that data block).

    As for data errors, there are numerous items that can cause them, and RAID does nothing to solve/mitigate them (not its function):

    - Unconditioned power.
    - The UBE/BER (uncorrectable bit error rate) on the drives, somewhere around 1 in 10^15 bits.
    - The controller (HBA) itself ~300,000MTBF
    - cable transmission errors (unconfirmed: ~1 in 10^12 bits for 1-meter 3.0Gbps SATA/SAS cables)
    - mainboard errors
    - cpu errors
    - memory errors at a similar 1 in 10^12 bits (IBM did a study finding roughly 1 bit error per 1GB of RAM per month).
    - Then you have background radiation (dependent on your environment mostly from Cerenkov radiation from what I've been reading).
    - Then you have filesystem implementation errors, OS errors, application errors which may write bad data on the system.

    I'm probably missing something but those are the main ones that I deal with/try to mitigate around here.

    As for being afraid of it, don't know about that. There are no real solutions to it. Sort of like life, you'll end up dead one way or another, is that a reason to worry about it?

    Seriously, the biggest items, barring filesystem, OS, and applications (as I have no way to calculate those), would be power issues (UPSes help mitigate) and HD failure far out in front (which RAID mitigates). Then next would be BER rates (for very large arrays) or memory errors (especially if you use a lot of RAM).
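    To put numbers on the UBE/BER point (a back-of-the-envelope sketch; the array sizes are my own examples, not from the post):

```python
def p_at_least_one_ure(bytes_read, ber=1e-15):
    """Probability of at least one unrecoverable bit error over `bytes_read`,
    assuming independent errors at the quoted ~1-in-10^15-bits rate."""
    bits = bytes_read * 8
    return 1 - (1 - ber) ** bits

TB = 10**12
for size_tb in (2, 12, 24):
    p = p_at_least_one_ure(size_tb * TB)
    print(f"reading {size_tb} TB: {p:.1%} chance of hitting a URE")
```

    Which is why BER starts to matter for very large arrays: a full read (or a rebuild) touches every bit.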

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...... ............|
    |.Supermico X8DTH-6f................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Mino lta magicolor 7450..|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|........ .................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|............... ..........|

  2. #27
    I am Xtreme
    Join Date
    Oct 2004
    Location
    U.S.A.
    Posts
    4,743
    Quote Originally Posted by XS Janus View Post
    For what scenarios is built in battery a good thing to have?
    How sensitive is RAID5?
    Is it more stable with a separate single OS drive?

    Your first question is answered like this: a BBU allows for write-back caching. The power could go out and the BBU will preserve the data in the cache for 48-72 hours, depending on what type of memory the cache is. Another thing which is nice is that you can turn off the system with a BBU while you're rebuilding your array and it will pick up where it left off (only tested with Areca personally).

    Second question: without using drives that support the write-twice command (Seagate ES drives, Western Digital RE2 drives), I'd say very sensitive. I had trouble even with 3ware controllers using regular drives.


    Third question: the answer is NO.


    Asus Z9PE-D8 WS with 64GB of registered ECC ram.|Dell 30" LCD 3008wfp:7970 video card

    LSI series raid controller
    SSDs: Crucial C300 256GB
    Standard drives: Seagate ST32000641AS & WD 1TB black
    OSes: Linux and Windows x64

  3. #28
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    So write-back caching can't be enabled on the array if you don't have a BBU?
    It sounds odd to me... I guess I misunderstood you guys.
    I remember reading a long thread about the Promise EX3850 card in general, and some of the guys there were complaining of poor write performance.
    Some solved their problems by disabling NCQ and enabling write-back cache by hand.
    But I don't remember them mentioning having a BBU unit.
    I'm a little confused now.
    http://forums.storagereview.net/inde...c=20519&st=100

    On RAID cards, is this option available as a kind of "manual override" when you don't have a BBU, but when you do, the card automatically sets it to ON?

  4. #29
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    You can force-enable it if you really want, but then you give up the stability benefits: any type of OS or power interruption can corrupt your array.

    We're not saying that you can't do it at all; it does make the RAID go faster. Just like jumping out of an airplane without a parachute, you will hit the ground faster. Not generally a good idea, though.


  5. #30
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Gotcha!
    I understand now.

    I have a mATX Gigabyte G33 board for C2D sitting around. It has a PCIe x4 slot. Will that board be OK, or is there a good reason to get a Supermicro or Tyan C2D or single-CPU board while I'm at it?

    I know someone mentioned that it's not really worth it unless I need PCI-X in my system.
    Last edited by XS Janus; 01-07-2008 at 01:52 PM.

  6. #31
    Registered User
    Join Date
    Apr 2007
    Location
    DFW, TX
    Posts
    87
    A 4 lane slot should do you ok. There are a few cards that are 8 lane (the Areca 1220 for example), but you'll still have a good selection of 8 port cards that use 4 lanes. Definitely not worth the price hike of getting a workstation board.

    With regards to write caching: some cards will warn you that there isn't a BBU present, but I have yet to see one not allow you to enable write caching because one isn't present.

    IMO, you should have a UPS anyways.

  7. #32
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Very interesting thread I just found.

    I hope I'm not hijacking the thread by asking the question I'm asking. If I am, please do tell me, I'll start another thread.

    Well, the thing is I'm starting to consider building a NAS box. Since it will basically be a huge repository of information (single point of failure until I backup, of course) without much strain, I was thinking about going software-RAID (pure software, not even ICHxR software).

    So, first of all, WHS (with that "special" partition style) or 2K3 (with software RAID 1 and 5)?

    Then, if W2K3, what are the odds of a RAID 5 data array surviving a motherboard death or a system migration? I've read a reinstall could mess up the array...

    Can a RAID-1 system array coexist with a RAID-5 data array?

    Is it possible to enlarge-rebuild a RAID-1 or RAID-5 array? (By changing each disk at a time, letting Windows rebuild the array at each disk substitution, and then telling it to enlarge the partition)

    Oh, one last thing. Single or dual-core? That's kind of a moot point, since E1xxx Cellys are just around the corner, but even so... RAID-5 XOR computing is hard, I don't want to be THAT limited...

    I'm not needing more than 50MB/s per thread, so that's not really an issue for me. What can I expect in terms of performance?

    Thank you very much for any help.

    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power for a good use.

  8. #33
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Well, I will experiment with how WHS works with a hardware RAID5 array for the data drives.

    My guess is that 2k3 would be the safer thing to go with, cause it behaves like normal Windows.
    I'm going WHS firstly cause we like how the whole server can be managed from one cool console.
    It's great for managing users, shares, permissions, storage and the website, all in one place.
    If there were such a well-organised application for regular Windows I would try it for sure.

    I don't think you can use the WHS pooling file system and expand a drive on a RAID5 array once it is already added to the storage pool.
    But I do believe it is possible to expand the array, assign a new volume to the new free space, and then add that "drive" to the WHS storage pool.
    Volumes larger than 2TB are also an issue for WHS, though there is a trick to use them. And you can always make smaller volumes and add them one by one to WHS.

    Expanding the array and even volumes can be done in regular 2k3 (especially using the hardware card).

    This will be the first thing I'll try on WHS, actually.
    If expanding the array doesn't work in WHS at all I will drop it and go 2k3.

    You must remember that WHS first copies data onto its primary physical drive and then redistributes it across the disks in the pool. Therefore a fast primary drive is a good thing to have.
    I intend to go Raptor 150 for this purpose. (Was thinking of 2x200GB 2.5" 7200 drives in RAID0 but was talked out of it cause of several things - see the other thread here and one on the 2cpu.com forum.)
    Size of the drive also matters, cause WHS will partition 20GB for system files and the remainder for temporary storage of files copied onto the server and files that you are moving around the shares.
    This is a limiting factor, cause you can copy a chunk of data no larger than this disk space at one time.
    So for the Raptor this would be 146GB - 20GB = 126GB. If you fill this up in one go it will say no more room and you will have to wait a bit for WHS to move the files off the primary drive to the other drives.

    You could use RAID1 for the system partition along with your RAID5 array, and I would definitely go that route if I were to use software or motherboard-based RAID.

    Software-based RAID - I don't know how this would work with WHS. I imagine recovering the system and all the shares would be quite a nightmare (RAID1 would help avoid this scenario somewhat).

    I will use some hardware-based card for my RAID5 array just to be able to sleep at night in case of any software or hardware failures.
    I don't know if you could even use software-based RAID on WHS, or a RAIDed system drive for that matter.

    I use a celeron 420 in my WHS box now, but it just uses 1x2.5" system drive and 1x1TB WD drive, and no software RAID.
    I will go E21xx for this new build.

  9. #34
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Wow, so much information. Let me try to digest it... hehe

    So, according to what I know about WHS and what you said, I'll either need to go fully hardware RAID, or I'll probably cause the OS a heart attack or two, since WHS is prepared only for that "sui generis" way of adding storage to the storage pool.

    Actually, this way of managing the storage pool (one big disk) is the main reason for me to like WHS... I'm usually more fond of (grammar alert?) W2K3, less chances of messing up something by tinkering with it... hehehe

    I guess I'm sticking with W2K3 or a "modded" XP Pro (to enable RAID1 and RAID5). The only thing missing is info on whether software RAID under W2K3 survives hardware changes (read: motherboard failures or system migration).

    As for the software RAID-5 migration, I know I can't add more drives after the volume is created (a hot spare would be nice, though... hehe) - any word on whether W2K8 changes this, btw? - so my idea was like this:

    1) I want to change 5 80GB drives with 5 320GB drives, keeping all the data and volume;
    2) I remove one of the 80GB drives, change it with a 320GB drive, and let the volume rebuild itself;
    3) After the rebuild, I do 2) again, for another 80GB drive;
    4) I repeat 2) and 3) for the remaining drives;
    5) I use some kind of software (Paragon, perhaps?) to enlarge the dynamic volume to the full capacity of the new 320GB drives.

    This was what I was referring to. Do you know if that is possible with software RAID?
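    For what it's worth, here's the capacity arithmetic behind that plan (my own quick check, not from the post):

```python
def raid5_usable_gb(drives, drive_gb):
    """RAID-5 usable capacity: one drive's worth of space goes to parity."""
    return (drives - 1) * drive_gb

print(raid5_usable_gb(5, 80))   # the 5x80GB array today: 320 GB usable
print(raid5_usable_gb(5, 320))  # after all five rebuilds plus the volume grow
```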

    Cheers.

    Miguel


    P.S.: Janus, thank you for the time and help.

  10. #35
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Yes, your upgrade plan sounds very nice.
    But I'm sure you cannot add a drive to an existing array that is any different from those already in the array.
    This fact will complicate things for you.

    Software RAID surviving any crashes is a bigger gamble than hardware, that is a fact.
    I guess you could maximise your chance of success in that scenario by reconstructing the PC as close as possible to the original.
    Ie. if your mobo goes, use the same OS you had installed before, using the same software used to create the RAID... and so on; even the mobo should be replaced with the same model to reduce the chances of failing.

    The hardware RAID tricks WHS into thinking that the newly created volume is just another physical drive.
    WHS doesn't support RAID, cause they intended it to be software JBOD with just a folder duplication option.
    And that's all fine and good; it's first-gen software, and as such works pretty well. You just mustn't abuse it.

    What I don't like is this: even though you have security for the shares you decided to duplicate, if a drive fails you will lose the other data. True, probably less important, but still data.
    And the other thing is you can enable folder duplication on the shares themselves but not for specific folders inside your shares!
    And that's a problem if 90% of your files don't need duplication but are in that share. Mega inefficient!

    I'm also not fond of the storage pool idea.
    I was so because I like being in control, knowing "where" each file is. But that is silly actually, cause on any RAID you don't "know" where your file actually is. All you should know is that it is safely taken care of.

    That's what they - Microsoft - are trying to tell us: trust us!
    Although this is a nice idea and they are the only choice, given everybody's past experience I would rather not push their software to the limit, and add another level of security to the whole story: hardware RAID5.

  11. #36
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    All of what you mentioned _IS_ possible with software (technically, hardware RAID controllers run software called 'firmware' or 'microcode'). The question you're probably asking is: does Windows software RAID handle this? Probably not, though I have not tried it. Generally software solutions are nowhere near as robust as dedicated hardware cards.

    I have done what you mentioned there with Areca cards (among others) and SANs in the past, easily. You may be able to do it under Windows, but frankly you may want to save yourself some hassles (and get better performance) by spending the $$ and getting a dedicated RAID card.

    If nothing else and if you have the time to play, try it (always do backups) and let us know how it works for you. I've always moved toward hardware solutions wherever I can only using simple software (ie mirroring et al) only in a pinch and when the budgets can't afford something better.


  12. #37
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    "Microsoft: Trust us"
    "User: Yeah, riiiight..."

    lol

    Ok, jokes aside, I actually like XP and W2K3 as they are right now. The crashes/lockups/general misbehaving I still get on XP and W2K3 are pretty much all because of failing hardware. Not so with my Vista laptop, that's a bloody mess (it's running stock, except for RMClock, and it usually crashes at least half a dozen times a week...).

    But I digress.

    Thing is, AFAIK, software RAID on W2K3 is based not on volumes, but on partitions (or so I've read, please correct me if I'm wrong). Also, by definition, "symmetric" RAID levels (basically anything BUT JBOD, which is not even a real RAID level... lol) usually result in higher-capacity drives "shortening" to accommodate the size of lower-capacity ones... Right?

    Well, the issue with failure resistance is what's bothering me... What's the use of having dynamic volumes if you can't move them from the machine where they were created? I'm guessing something is written on the disk and then another W2K3/W2K8 install can "pick that up", even if the controller changes...

    Unfortunately, I don't have the hardware available to test either of the two theories ("upgrading" by degrading - hehe - and dynamic volume migration). I only have a pair of 80GB drives ATM, and not even the same size (one is actually 82GB unformatted... go figure...). Is there anyone who can try this (like before putting everything on the hardware RAID controller, or before creating the array)?

    As for the WHS JBOD, thank you for reminding me of that. I was thinking something different, but that makes a lot of sense. It seems very much a "tweaked" JBOD, with shadow copies enabled. While we can't "tweak" software JBOD the same way WHS does on other Windows systems, we can use Shadow Copies... Hmmm... Nice! Very interesting indeed... I've got to think about that one.

    Ok, thank you very much. Anything else you can help me with, appreciated.

    Oh, I almost forgot. I read you're using a GA-G33M-DS2R (right on, right?). Do keep in mind that, if you're considering using the ICH9R controller, some of the ports may be sharing bandwidth. I know that happens on ICH7 southbridges, and ICH8 also seems to "suffer" from the same "problem" (at least two ports on my P5B Deluxe are marked in a different color, and the manual says they should be used for "secondary/optical" drives).

    Cheers.

    Miguel


    P.S.: Any recommendation on stripe size vs. cluster size? Should the stripe size be something like "cluster size * (number of drives in the array - 1)"?


    [EDIT]@stevecs: Well, budget and hardware availability ARE problems for me. That's always the problem; I've lost count of the interesting projects I couldn't even start due to lack of funding... But it's either the lack of hardware (like having to pay €120+ for a 90W PicoPSU+brick, it's just NOT happening) or plain lack of funds (I can't possibly justify a dedicated RAID-5, 4-port+ PCI-E card, plus a whole system to go with it, AND the HDDs... Upper management - read: parents - would basically kill me on sight...). At least for now, software RAID is the only option, maybe with a little SATA 2-port add-on card...
    Last edited by __Miguel_; 01-28-2008 at 02:19 PM.

  13. #38
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    @stevecs: I don't get your last post. Are you saying that it is quite possible to expand the array with larger drives than those already in it, by replacing and rebuilding them one after another, and therefore in the end have more new disk space to use when finished?

  14. #39
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    @Miguel - no problem, completely understand; in which case you'll be able to put up with more hassles than a normal business/production user would. As for stripe sizes, this comes up very often and there are no real easy answers, as it depends on your workload, RAID type, and other factors. The first thing I would do is go to http://www.xtremesystems.org/forums/...=150176&page=2, get diskmon and the diskmon performance spreadsheet, run some captures, and figure out what your workload type is. That would at least be a start. If you do a lot of writes, you generally don't want to do RAID 3/4/5/6. It's very hard to tune a system with divergent workloads (ie a bootable OS & data on the same drives), in which case just use 64KiB as a catch-all. General tuning is much more complex and you'll generally need hardware.
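    For reference, the stripe-width arithmetic that makes full-stripe writes line up (my hypothetical 8-drive examples, using the 64KiB catch-all mentioned above):

```python
def full_stripe_width_kib(chunk_kib, drives, parity_drives=1):
    """Data carried by one full-stripe write: chunk size x data disks."""
    return chunk_kib * (drives - parity_drives)

print(full_stripe_width_kib(64, 8))                   # RAID 5 on 8 drives
print(full_stripe_width_kib(64, 8, parity_drives=2))  # RAID 6 on 8 drives
```

    Writes that are a multiple of that width (and aligned to it) avoid the read-modify-write penalty entirely.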

    @XS Janus - Yes, I expand arrays by putting in larger drives (say upping 500GB to 1TB), or by taking a 10-drive array and adding drives to make it a 20+ drive array, without a scratch/restore. Not that you shouldn't have backups just in case, but this is common. The issues generally are with the OS/filesystem, but most if not all of the ones I use are unix-based and have no issues. Basically you just expand the array at the hardware level, then you expand the volume (or logical volume) which is presented to the OS, then you have the filesystem expand to the new volume size. Generally you can do all of this on-line as well with no down-time (the main reason for it), as when dealing with multi-TiB arrays a backup/scratch/redo/restore process is NOT generally acceptable in business (ie, it can take days. Try telling your boss that your main X system is going to be down for several days. Ain't going to happen.) But these functions aren't generally on the lower-end cards.


  15. #40
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Well, with adequate firmware on the RAID controller and GUI manager, I've read about adding new drives to an already-in-place volume, so changing drives for bigger ones doesn't seem all that far-fetched...

    What you need to remember is that, no matter the size of the new drives, you will only see the extra empty space after all the drives in the array have been migrated. Also, I'm guessing you'll need a partition manager to be able to expand the volume to fill the extra drive capacity, else you'll be stuck with unused space. Or you could create a new volume on the array.

    The thing is, Windows only sees the size the RAID controller tells it is available. When you change all the drives, you'll have, say, a 160GB partition on a 640GB volume, which Windows thinks is a new disk. No biggie here, I think.

    The major problem is having software RAID to do the same thing... lol

    Cheers.

    Miguel

  16. #41
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Not really (if I'm understanding your terminology correctly). Basically the physical drives go into a 'raid set'. On that raid set you then get to put any number of 'volume sets'. A volume set can be something simple like the entire raid set as one partition, or you can have, say, a volume set for RAID-10, RAID-5, RAID-6 et al (on the same drives, in the same raid set). Think of each volume set like a drive, as that's what the OS will see it as. When you add drives you have to add them to the raid set first. So say you have a 10-drive raid set with a single RAID-5 volume set. You add a drive; the data you had on 10 disks gets redistributed across all 11 drives, so you have 1/11th of each drive 'free'. At this point you can assign that free space either to a new volume set or to expanding a current one. When you expand a current one, the OS sees it the same way as if you did a full-drive copy of a HD to a larger one (ie, unutilized space at the end of the drive where you can create another partition). Or under Windows you can use Paragon or other partition tools to grow your NTFS partition to the larger size. Under unix it's similar, though with a couple more knobs you can turn to make it more flexible (LVM) or easier (filesystem-level 'growfs'-type commands).
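    A toy model of that raid-set/volume-set bookkeeping (mine, not the card firmware's actual logic; the drive sizes are made up):

```python
def usable_gb(drives, drive_gb, parity_drives=1):
    """Usable capacity of a parity raid set after data is redistributed."""
    return (drives - parity_drives) * drive_gb

# 10-drive RAID-5 raid set of 500GB drives, grown to 11 drives:
before = usable_gb(10, 500)
after = usable_gb(11, 500)
print(before, after, after - before)  # the freed space a volume set can claim
```

    That `after - before` slice is what you hand to a new volume set, or use to grow an existing one before the OS-level partition grow.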


  17. #42
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Sorry, steve, we posted at the same time, didn't read your post up until now.

    I must say we seem to be on the same page about hardware RAID capabilities. I might have expressed myself badly, but it seems we're saying the same thing. I just wanted to know if Windows would be able to see the growth on the RAID set (same number of drives, only different size; more drives is generally harder for a controller to handle, I think...). That would be just perfect, Paragon would take care of everything else...

    Oh, btw, I didn't get your first sentence on your post before the last. Something about hassles... I got confused there. Can you elaborate on that, please? Thanks in advance.

    Also, I'll check out your link. But I know the volume will be kind of a "WORM" volume (meaning I'll dump the data there - vids, music, photos, etc. - and it will basically just be a file-retrieval server).

    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power for a good use.

  18. #43
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    By the hassle part I basically meant that what the $$ really buys in enterprise solutions is saved time and maintenance. Enterprise solutions cost more largely because business tries to get everything down to the lowest tech level of support personnel. I.e., most companies that have SANs or even large arrays don't have people who actually understand WHAT the storage is or how to configure/tune it. They may have been helpdesk or general staff before getting saddled with the job and don't have the training (not disparaging helpdesk et al. people here). With that in mind, a lot of the configuration is set up so that 'it just works' with minimal trouble. It's not doing anything magic; it just tries to mask a lot of the complexity from the end user. When you don't have that (i.e., you're a home user or don't have the $$), you have a lot more configuration to do, a higher level of understanding is required, and there generally are no safety nets. You may run into things (and learn more) that enterprises don't, as you'll be filling the gap between 'ease of use' and 'functional' with your own cuts & scrapes.

    As for the raid expansion, yes it does appear we're both on the same page.

    If you're using it more for long-term storage, with mainly reads as opposed to writes, then a parity RAID (3/4/5/6) really comes into play due to the storage efficiency. The write penalties of 4:1 for RAID 3/4/5 and 6:1 for RAID 6 get mitigated if you don't do many writes, and reads are pretty good on those levels. For that usage model it really comes down to what level of storage availability you need (i.e., how resilient a system), and that's what the spreadsheet is designed to calculate so you can make that judgment.
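    To put rough numbers on the write-penalty and storage-efficiency trade-off above, here's a quick back-of-the-envelope sketch (Python; the per-write I/O counts are the standard textbook read-modify-write figures, not measured values from any particular controller):

    ```python
    # Back-of-the-envelope RAID write-penalty math (standard textbook
    # figures, not measured numbers). A small (sub-stripe) random write on
    # parity RAID costs extra I/Os: RAID5 = read data + read parity +
    # write data + write parity = 4 ops; RAID6 = 6 ops (two parity
    # blocks). A full-stripe write avoids the reads entirely, which is
    # why write-back caching that coalesces full stripes helps so much.

    def small_write_penalty(level: str) -> int:
        """Disk I/O operations per logical small (sub-stripe) write."""
        return {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6}[level]

    def usable_fraction(level: str, n_drives: int) -> float:
        """Fraction of raw capacity left after parity/mirroring."""
        overhead = {"raid0": 0, "raid1": n_drives // 2, "raid5": 1, "raid6": 2}
        return (n_drives - overhead[level]) / n_drives

    if __name__ == "__main__":
        for lvl in ("raid5", "raid6"):
            print(lvl, "small-write cost:", small_write_penalty(lvl), "ops;",
                  f"usable capacity on 8 drives: {usable_fraction(lvl, 8):.0%}")
    ```

    So on a read-mostly "WORM" volume the 4:1 (or 6:1) cost rarely bites, while the capacity efficiency stays high.
    
    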

    Always remember, though, that RAID has nothing to do with data integrity; it is purely a hardware-availability solution. I.e., you still need backups.


  19. #44
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, I got the "hassle" part now.

    I really don't mind learning. I love IT, especially networks, VoIP and storage, so any chance to learn is a good one. Unfortunately, the chances to learn sometimes can't happen due to the (chronic, I might add... lol) lack of funding. My last system dates back to 2006 (C2D-based), and until now I've had LOADS of ideas (self-contained, extra-small, stackable mini routers; self-contained, small, stackable NAS devices; portable device updaters - like WSUS in a box... hehehe; merging my landline with my network, etc.), but zilch funding for them, so I'm kind of "asking for a Donuts" (a Portuguese TV commercial where angry or frustrated people raised an index finger and Donuts dropped from the sky to fit it... lol).

    The thing is, I'm actually starting to NEED a NAS... I have to circumvent this problem. Also, I need to start making money... An internship as a lawyer is hell in Portugal; you don't even make enough money to live on your own, let alone to afford IT stuff...

    But I digress. As for data backup, I know it's still necessary. The good thing is, though I have had bad experiences with hardware, including HDDs, not once have I needed a backup. Which is lucky, because I never actually backed up all my data, just the "if-this-disappears-I'll-kill-myself" stuff

    Again, thank you for all the help. I'll check that data ASAP (at work now... )

    Cheers.

    Miguel

  20. #45
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    This is great news in my book!
    I was under the impression that when you build an array you are stuck with the drives you have.
    This is great; I wasn't even hoping for that. I had dismissed the notion when I read that all drives should be the same when you first build the array. Silly me!

    Now I must read all the stuff you guys wrote after explaining my last question.

    Oh, BTW, I'm thinking of canning WHS on the new system.
    I've been working on my Excel files lately and some became unavailable. Not corrupt, but rather "file path couldn't be found" or something along those lines.
    The investigation begins (and never stops, in Microsoft software's case )

  21. #46
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    @XS in some cases you are; remember that not all cards can do this. It's mainly the enterprise-level cards, so you're talking >$700, and that price can scare people. Look at the manuals before you buy. Also, most HBAs have limits. I.e., Areca can grow only RAID 10/3/5/6 arrays, and only up to 32 drives. You can create a larger raid, but it has to be a raid 30/50/60 (plaid) setup, and you can't grow plaids with Areca. There are ways around this, though (i.e., creating several single-level raids and using OS LVM functions et al.), but 90%+ of people will never really hit that. In my case it's an issue, as I know I'll hit 32 drives in about 18 months, which is why I'm working on longer-term solutions (without having to buy a DMX or something. )


    I haven't used WHS at all, but from what I've seen of others posting about it, I'm just scratching my head as to what niche it was supposed to fill.


  22. #47
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    From posts like stevecs's it's clear we're in the right (xtreme) place...

    I mean, right now he's already managing much more storage capacity than most of us will ever DREAM of, and yet he's saying that not even two years from now he'll have storage shortage problems... eeek...

    @Janus: a month or so ago a bug/problem with WHS was detected. It causes data corruption and disappearance. As of now, there is no fix that I know of...

    I think it has something to do with the way WHS manages data - you don't have clusters in the usual sense of the word, you have "multi-file" related clusters, meaning that if two files have identical clusters, only one copy of the cluster gets stored, which is always hard to manage...

    Now, where do I find a third (preferably free) 80GB drive? I need to try out W2K3 and RAID-5 management...

    Cheers.

    Miguel

  23. #48
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, guys. The weekend has started, and before I start working again (I don't even have time to play games anymore; I have to work on weekends too, so I don't think I'll be able to buy AND use an 8800GT to go FullHD gaming on the 37'' LCD TV I received for Xmas... ), I managed to get hold of the answers I needed about Windows 2K/2K3 software RAID. Here it goes:

    Quote Originally Posted by Someone
    Yes, you can recover the array. But you will need to follow the steps to import dynamic disks into the LDM database.

    I believe the steps are in the Windows Server 2003 online help.
    Ok, one down. I just need to know what exactly the steps to import dynamic disks into the LDM database are (I didn't even know it existed OR what it was/is...). If you want to check that thread, go here.

    As for adding disks, it seems it was possible on W2K, with one caveat.
    Quote Originally Posted by Someone else
    As long as you're using Windows' software raid solution and each disk has been properly upgraded to a dynamic disk you merely need to install the drive(s) then add them to the array and expand the volume to the new drive(s) in Disk Management.
    Since the Disk Manager stayed basically the same for W2K3, and none of the steps described above seems out of line with the W2K3 Disk Manager interface, it seems I'll be covered on this part too... Nice! At last, good news. This one is available [URL=http://www.petri.co.il/forums/showthread.php?t=448]here[/URL].

    However, at least for 2K, you have to take this into account. Not really an issue, since you can bypass it either with third-party utilities or by creating the volume AFTER making the disks dynamic (not really a problem for a RAID-5 array). The only place I think that limitation could bite is boot-drive mirror arrays, which, by definition, cannot be dynamic at install time (or at least usually aren't...)

    Ok, now the only things left to know are 1) how much of a performance impact one incurs when using the "secondary" ports on the base ICH7 at the same time as the "primary" ones... and 2) where I can find some kind of dirt-cheap 2-port PCI-E 1x controller here in Portugal (in case the performance hit is substantial...)

    Oh, right, I almost forgot. How does an E1200 + 1/2GB + 945G-based (uATX-or-smaller addict here... lol) mobo sound to you guys for the fully-featured NAS I want to build? Also, W2K, WXP (with the RAID-5 mod) or W2K3?

    Cheers.

    Miguel

  24. #49
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    For the windows software raid I'll let someone else answer as I am not that familiar with it.

    As for the ICH7/ICH7R: it has a 2GiB/s link to the northbridge max, and controls up to 6 PCIe lanes plus 4 3.0Gbit/s SATA connections. So if you are looking at more than 4 drives, you really want to use the PCIe bus for them. Note that everything on the SB is going to be capped by that 2GiB/s max throughput (i.e., your PCI slots share 133MiB/s, each PCIe lane is 250MiB/s, et al. It all adds up).
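    As a rough illustration of that shared-uplink math, here's a sketch (the per-device figures are the theoretical peaks quoted above, and real drives and slots won't all peak at once, so treat this as an upper bound, not a prediction):

    ```python
    # Rough ICH7-class southbridge bandwidth budget, using the theoretical
    # peaks from the post: 2 GiB/s uplink to the northbridge, a 133 MiB/s
    # shared PCI bus, 250 MiB/s per PCIe lane, ~300 MB/s per SATA-II port.
    uplink_mib_s = 2 * 1024

    consumers_mib_s = {
        "pci_bus_shared": 133,      # every PCI slot shares this one bus
        "pcie_x1_lanes":  2 * 250,  # e.g. two x1 lanes in use
        "sata_ports":     4 * 300,  # 4 ports at SATA-II wire speed
    }

    peak_demand = sum(consumers_mib_s.values())
    print(f"peak demand: {peak_demand} MiB/s vs uplink: {uplink_mib_s} MiB/s")
    print("oversubscribed" if peak_demand > uplink_mib_s else "fits (on paper)")
    ```

    With this particular mix the uplink still has headroom on paper, but loading every port and slot at once narrows it fast.
    
    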

    But with the rest of the specs (E1200 et al.) you're going to run into a MIPS issue first. Trying to move data to/from the drive subsystem and out a network port, plus running the raid-whatever in software, is not going to be a screamer. Remember Amdahl's balanced-system law: "a system needs 1 bit of IO per second for each instruction per second", or ~8 MIPS per MB/s. It's just a rule of thumb, but given the other design issues, the more accurate question is probably: what speeds are you trying to reach?
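    That rule of thumb reduces to simple arithmetic; a minimal sketch (the 8-MIPS-per-MB/s figure is just the heuristic quoted above, not a measured characteristic of any specific CPU):

    ```python
    # Amdahl's balanced-system rule of thumb: ~1 bit of I/O per second per
    # instruction per second, i.e. roughly 8 MIPS of CPU budget per MB/s
    # of sustained throughput. A sizing heuristic only.

    def mips_needed(mb_per_s: float) -> float:
        """CPU budget (MIPS) implied for a given sustained throughput."""
        return 8 * mb_per_s

    def throughput_supported(mips: float) -> float:
        """Sustained MB/s a CPU of the given MIPS could feed, per the rule."""
        return mips / 8

    # e.g. ~50 MB/s of LAN traffic implies a ~400 MIPS budget, while a
    # ~12000-MIPS CPU could in principle feed ~1500 MB/s.
    print(mips_needed(50), throughput_supported(12000))
    ```

    The caveat is that the rule counts total system throughput; software RAID parity work and network stack overhead eat into the same budget.
    
    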


  25. #50
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, so it seems I've opened Pandora's box... lol

    I know NB-to-SB traffic is capped at 2GiB/s. I'm all too familiar with Intel chipset diagrams, unfortunately. The issue I'm talking about is that, at least apparently, on the base ICH7 (not the RAID variant) the "3Gbps each" does not apply all that well... I remember using two drives on the same port pair on an ICH7-based mobo, and performance was rather "yucky" when moving files from drive to drive. Things were much smoother once I swapped one drive to another port... I'm afraid I might hit another wall like that if I load up all the ports.

    Ok, as for that MIPS rule of thumb, you're REALLY scaring me... A quick Wikipedia search reveals that an Athlon X2 3800+ is capable of ~14500 MIPS, with an FX-57 providing 12k. Even if the E1200 can only do the same 12k (the only Core-based CPUs on that chart almost double that, the X6800 rating at ~27k), that's around 1.5GBps of throughput... And I'm not counting on much more than 30MBps on the Ethernet side, 50MBps at most...

    I mean, this is a home NAS, not a corporate file server... hehehe. Can I get by with that? Or do I really need a Quad for the NAS?

    Cheers.

    Miguel
