
Thread: RAID5 file server build advice

  1. #51
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Hmm. I have a couple of ICH7R systems here (P5W64WS Pro) but haven't really seen an issue going drive-to-drive on the on-board controller. Granted, I don't use it much (and will be removing all drives from them shortly, going to RAID cards). It could be something specific to the ICH7 like you said, though from the Intel docs I don't see anything that stands out. Are you sure you were using the ports on the SB and not ones that may hang off another chip (i.e., a Marvell)? Board manufacturers usually add a couple more ports via another controller that also handles network or other functions.

    As for the balanced-system rule of thumb, it's only a computer design guideline. If the E1200 can do ~12K MIPS, that basically means you have the processing power to handle that rate if it were the only thing you were doing (it's not; you're also running the OS and the RAID software, for example), but that's still not bad. Basically, if you are looking at ~100Mbit connections you should be good. The question will more often come down to WRITES, as those will be all software based and each write IO turns into 2 read and 2 write IOs with the parity calculations, though I don't see that being much of a problem for a pure NAS 'server' type build on only 100Mbit.
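    To put rough numbers on that write penalty, here's a quick back-of-the-envelope sketch in Python; the per-drive IOPS, drive count and request size below are assumed figures for illustration, not measurements from this thread:

    # Can a small software RAID-5 keep up with a 100Mbit link on random writes?
    # All figures below are assumptions (typical 7200rpm SATA drive, 4-drive array).
    DRIVE_IOPS = 75          # random IOPS per drive (assumed)
    DRIVES = 4               # drives in the array (assumed)
    REQUEST_KIB = 64         # average request size in KiB (assumed)
    WRITE_PENALTY = 4        # RAID-5: each host write = 2 reads + 2 writes on disk

    array_write_iops = DRIVE_IOPS * DRIVES / WRITE_PENALTY
    write_mib_s = array_write_iops * REQUEST_KIB / 1024          # ~4.7 MiB/s random
    link_mib_s = 100 / 8 / 1.048576                              # 100Mbit/s is ~11.9 MiB/s

    print(f"random writes ~{write_mib_s:.1f} MiB/s vs link ~{link_mib_s:.1f} MiB/s")

    Purely random writes are the pessimistic case; streaming writes and reads fare much better, which is why this mostly matters for write-heavy workloads.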

    Try it and report back. As engineers say, there are two worlds: "world as designed" and "world as built". If you have Gbit I would be curious to see how much it can push in either reads or writes (streaming & random).

    Server/Storage System: Supermicro X8DTH-6F, (2) Xeon X5690, 192GB Samsung PC10600 ECC, 800W redundant PSU, NEC slimline DVD-RW DL, (6) LSI 9200-8e HBAs, (8) ST9300653SS 300GB (RAID0), (112) ST2000DL003 2TB (RAIDZ2), (2) ST9146803SS 146GB (RAID-1), Ubuntu 12.04 64bit Server
    Gaming/Work System: Asus Z9PE-D8 WS, (2) E5-2643 v2, (2) EVGA nVidia GTX670 4GB, (8x8GB) Kingston DDR3-1600 ECC, Corsair AX1200, Lite-On iHBS112, PA120.3, Apogee, MCW N&S bridge, (1) Areca ARC1880ix-8 512MiB cache, (8) Intel SSD 520 240GB (RAID6), Windows 7 x64 Pro
    Sundry: HP LP3065 30" LCD monitor, Minolta magicolor 7450, Nikon Coolscan 9000, Quantum LTO-4 HH, Dell D820 laptop (2.33GHz, 8GB RAM, DVDRW, 128GB SSD, Ubuntu 12.04 64bit)

  2. #52
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Well, it has been some time since that happened, before the latest BIOS update, AND I was using a "mixed" SATA setup (one disk was SATA, the other was SATA2), which could be the cause of the weirdness...

    As for the controller, I am positive I was using ICH7, because that board doesn't have extra controllers (It was a 775i945GZ, by ASRock... hehehe).

    So, now I'm stumped on those hardware requirements... I've searched a little, and a comparison a few years ago between ICH7R and NVIDIA RAID controllers (which are basically software RAID controllers, meaning I'll probably get the same performance using W2K3 and dynamic disks) over on Tech Report almost gave me a heart attack... I mean, <10MBps writes with 3 or 4-disk RAID 5 arrays, on a P4EE? Sheesh! Even double that is still lousy...

    Now I'm sad... :'( I'll either have to live with lousy writes and somewhat safer files (reads are not that bad, up to ~50MBps), or ditch the whole RAID-5 idea, since I'm not getting money for a hardware RAID controller any time soon...

    I still want to try it, though... I might just get lucky and be able to live with that performance. And I can still BSEL-mod that E1200... lol

    Thank you for your help, guys. I'm not sure if I'll be able to pull this one off, but I'll try. I'll keep in touch if/when I have news for you, ok?

    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power to good use.

  3. #53
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    You'll probably get better performance out of Windows RAID than the on-board regardless. Also, you can forgo RAID 3/4/5/6 entirely and just do a RAID-10, which does not have the write penalties.

    Another item is how often are you writing to your array? If you can track your workload (I put a spreadsheet that uses diskmon in the raid sticky thread) you can see what ratios you have. Parity raids are good on reads, it's the writes that are a killer. If you're not doing many writes you shouldn't have much of a problem.

    Also remember that 'safer files' really has nothing to do with RAID. Let me beat that dead horse some more: RAID is for hardware availability, NOT data integrity.

    Yeah, please post with results when you build it.


  4. #54
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    When I say "safer", I'm just comparing (insert RAID level other than 0 or JBOD here) against a single drive without backup (my usual approach, except for VERY sensitive data... I'm not going to back up my MP3s, which I still have the CDs to rip from if need be, right?).

    I think I've been very lucky up to this day. I've only managed to kill a 40GB 2.5'' drive (plugged it in the wrong way... ) without much on it, and now my laptop HDD has ~20GB worth of bad sectors (luckily I'm still within the warranty period...), so I really don't know what losing HDD data feels like... hehehe

    Of course there is no extra safety from viruses/trojans/general malware/user stupidity (which I believe is just another form of malware... lol)/catastrophic hardware failures (that is, the whole array dying) with RAID, but it IS nice to know that if something "light" happens (like losing a drive), you only need to add another one and you're set to go...

    As for reads/writes, from what I'm conjecturing, I'll need to dump my library to the disk/array, and apart from the occasional timestamp change or file rename/update, it will pretty much be a ROM drive. Each user takes care of their own files and backups (most of them won't even be able to write anything to the share, so...).

    Perhaps I'm just thinking waaaay over my head. I'll probably be happy enough with JBOD, Drive Extender (albeit with its caveats), or even plain old single-disk partitions, since 1+0 or 0+1 is simply too much waste, even though it has many benefits on CPU load... Oh, decisions, decisions...

    First, though, comes getting the drives AND the rest of the hardware, especially a BSEL-mod-friendly mATX board... Any thoughts on that?

    Cheers.

    Miguel
    Last edited by __Miguel_; 02-02-2008 at 10:52 AM.

  5. #55
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    And you can turn off access time stamps on the files, which avoids turning every read into a read & write request.

    I would probably start with a RAID-5, as it's better than a JBOD/RAID-0 config for availability. From what it sounds like, you're doing mainly reads, so turning off access time updates will help, and you should be good for a small system like that. It's hard to tell without modeling real data and your real workload. Basically, if you can keep it to reads, parity RAID is good.


  6. #56
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, got it. Thanks.

    While figuring out how to disable access time stamps, I found some other tweaks for HDD use, mainly kernel paging and cache tuning... I don't know how well those would work on that array, but since I'll need every bit of performance I can get, do you think they will do any good?

    Cheers.

    Miguel

  7. #57
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Kernel paging would be something to disable if you have enough memory in your system (say >=2GB). As for cache tuning, yes, it could help, but I would hold off until you get the array and have some real figures. That's read cache, which you probably won't really need with your stated 100Mbit bandwidth. This is a good time to mention Amdahl's argument. (http://en.wikipedia.org/wiki/Amdahl's_law)

    Granted, around here we are very obsessive and seem to throw the law of diminishing returns to the wind.


  8. #58
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Kernel paging and the law of diminishing returns: check.

    Amdahl's law: not so much... waaaay too complex for me, I'm not really good with numbers. I think I got the basics of it, though.

    Also not settled on the 100Mbps bandwidth. At first, yes, but I'd like to go 1Gbps as soon as I can get a compatible switch. Granted, write performance WILL be hideous, but read performance should be 200+Mbps (which is what I'm accustomed to on Gbps links, really). Anyway, it doesn't matter. First I have to build the thing... lol

    Cheers.

    Miguel

  9. #59
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Small update:

    Being the thoroughly stubborn SOB I am, I decided to dig a little more into host-based RAID5 performance (a.k.a. fake RAID). While W2K/2K3 performance numbers are nowhere to be found, my guess is that, since it will be using the same resources as the FakeRAID controllers, performance should be just about the same, perhaps sometimes even a little bit better (since there is one less driver layer to work with...).

    That being said, I found out that a single-core Opteron 148 (here) managed to go as high as ~125MBps sequential and ~10MBps random read/write on a RocketRAID 2200 card (here); not bad, considering the same controller only managed 18MBps random read/write on a RAID-0 array. My guess is I should expect something in that ballpark with W2K3 and similar configurations, right?

    I also found another review of the RR2300, with interesting results (here for the relevant results), and yet another one (here), this time with the ICH7R in the mix, which should be more or less what I'll be able to get from the ICH7 + software RAID (right?).

    The odd thing is, unless these controller cards/ICHs have something that's helping the actual reading and writing of data to and from the disks (not talking about the XOR engine, which they all lack), the CPU usage is incredibly low... On the last review I linked to, maximum CPU usage was below 3%... on READS! Writes were even lower...

    Any comments? That's weird, I always thought the CPU would be fighting with the XOR operations... Now it seems the bottleneck is somewhere else... I mean, if top-of-the-line single-core ~500MHz dedicated XOR engines are capable of 800+MBps, then if most of a general-purpose dual-core 2GHz+ CPU were used for that, one should be talking about at least 500+MBps throughput... Hell, if ~35MBps of random writes only takes ~3% of a 955EE CPU, that CPU alone should be able to output at least 600MBps of random writes, and still have room to do everything else the system needs...

    I'm officially ...

    Cheers.

    Miguel

  10. #60
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    As for those reports, take them with a grain of salt. It looks like they were only testing the outer tracks of the disk, which means those numbers would only be good for the first, say, 10% of the drive; the other 90% would be progressively worse. Next, the streaming read/write tests you can pretty much ignore. The only way you would see that is if you used all the drives (100%) for, say, HD video capture, with no OS or anything else on them, and they were defragmented.

    The general performance one actually is interesting, mainly the RAID-0, as they got ~18MiB/sec at 64KiB size. Generally a drive will have 75 IOPS, so 75*4*64KiB/1024 = 18.75MiB/sec max (random). Now, they are doing 50/50 random/sequential so it should be higher than that, but they didn't say anything about how many blocks they do in sequence for the sequential reads. The more blocks they do, the worse the number looks.
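    For reference, that ceiling works out in a couple of lines of Python:

    # The RAID-0 random ceiling quoted above: 4 drives x 75 IOPS each, 64KiB per request.
    iops_per_drive = 75
    drives = 4
    request_kib = 64

    max_random_mib_s = iops_per_drive * drives * request_kib / 1024
    print(max_random_mib_s)    # 18.75 MiB/sec, right where the review's ~18MiB/sec sits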

    The RAID-5 result is actually a little too good; they must be doing some caching, because even with a 50/50 ratio and 67% reads it should be lower (around 4.5MiB/sec random) for those drives.

    As for the overclockers site, there's a lot missing, enough to ignore the data. I.e., what's the stripe size? Their test machine had 2GB of RAM; what was the test size to disk for each section? Random access: over what part of the disk, or over the whole array? Sequential: what was the average request size? Over the entire array?

    Basically those two reports suffer the same problems I see here often: first, trying to put a number to something without defining all the elements that go into it so it can be reproduced, and second, running benchmarks that have no bearing on the actual workload.

    There are uncountable bottlenecks in a system (there's a reason why there are not many computer architects around; it's NOT an easy field). The big thing here is that you need to come up with a specific performance goal first, and then you design toward it.

    If you want good Windows performance for apps/OS, then you use the first ~10% of each HD, and use HDs with the least number of cylinders (one being best to cut down latency). Ideally you would then just not use the other 90% of the drive at all (don't even partition/format it). This keeps the head actuator in just that one section, which will increase your IOPS. Wasteful of space, yes; better performance, most definitely.

    Data storage on its own set of spindles: great. Doing the above, using ~10% and limited cylinders: also great. Bad for space, great for performance.

    If you are looking for only 25-30MiB/sec of network performance you don't need to do the above; pretty much any RAID-1 could do that for you with current hardware. If you're looking for a guaranteed 100+MiB/sec (full GbE) then you have to start the tuning like above, unless you know your dataset/workload and can cut some corners. I.e., if you KNOW you're doing 100% sequential and you know your average block request size, you can get by with fewer drives. If it's random or partially random then that goes out the window and you have to add drives. If you can't add drives and you need faster speeds then you only use the first part of the drives and save arm movement.

    It's all a trade-off game. Only problem is that it's an expensive game.


  11. #61
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    The more I post in this thread, the more I learn about storage subsystems, and how they work. Even if I can't manage to get "upper management" approval on the NAS, I'll consider myself lucky just for being able to learn all this stuff.

    So, having those reports "filtered" by someone who actually knows something about how HDDs work, it basically comes down to something like this:

    1) If I want full Gigabit (not happening), I'll need both a RAID controller AND a lot of disks, even more of each if I want that performance on RAID-5 arrays (and then I'll probably be CPU-limited... lol)

    2) If I want "consumer Gigabit" speeds, a RAID controller is still recommended, and I'll be on the low side (like high 10/100 speeds... lol) if I need random writes on software RAID-5...

    Also, while searching for my ideal stripe size, I found this, which will basically dictate my stripe size, since I'll be doing most (if not all) of my reads and writes through SMB.

    Ok, I think I finally got this thing kind of sorted out. Only the hardware missing now... lol

    Cheers.

    Miguel

  12. #62
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    A RAID controller basically buys you ease of maintenance; offload of the CPU for operations (saves cycles for applications); (arguably) better RAID integrity, due to it being customized for that function; and (sometimes) portability & growth. Generally with modern CPUs you may get better performance, but only if it's dedicated and you have a lot of cycles/fast pathing to the drives.

    As for Gbit Ethernet, you can get that if you have multiple streams, but any one stream (with SMB/CIFS) is pretty much only getting ~40-50MiB/sec, so you'll need a minimum of 2 client machines to saturate your server (usually more). And yes, you want a good Ethernet card, not the ones that come on commodity motherboards. Intel dedicated cards have performed the best in my experience.

    As for that link, yes, that will help; however, it really has nothing to do with your stripe size. That's your network block/buffer size, and yes, setting that to 64k is good, as is setting your window sizes.

    You really need to run diskmon or similar to find out what you are actually requesting of your disk subsystem, to get an accurate picture of what your workload is, and base your stripe size off that. Generally 64k is picked as an all-around average for OK performance.

    If you do a decent number of writes then yes, avoid RAID 3/4/5/6. Instead use something like 1/10/100/101 or whatever works best. RAID 3/4/5/6 were designed mainly to solve the storage-efficiency 'problem' of RAID 1/10, not as a performance solution. RAID-6 is a kind of exception: it was also for storage efficiency, but it additionally addresses the MTTR issue (a failure during the RAID rebuild process).

    As has been said, you should invert your design. Figure out the performance you need and then design backward to the drives not the other way around.


  13. #63
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I am waiting for some availability and price deals on 3ware 9650SE-8LPML or Areca ARC-1220.

    But I am still torn between the two.
    3ware is available locally so that is definitely a plus.
    Also it has no whine-prone small fan, it is PCIe 4x so I know it will go in Gigabyte G33-DS2.

    Areca is more popular now and support would be easy to find on the forums.
    I know it is also the better performer, but I don't know whether that should be a deciding factor for a NAS server with 8 drives and not many writes.
    Also, would the Areca work OK in a 4x PCIe slot?

    Do those two RAID cards have any issues working in PEG-intended slots?
    But the BBU module for the Areca is pretty hard to find (the Areca websites are NOT working for me today).
    Should I even go for a BBU, or just enable write-back cache with only a UPS and an automated Windows shutdown when the power fails?
    Remember, this is a home server, so when power fails nobody is writing anything to it.
    Is the write-back option available on those two cards if you don't have a BBU connected?

    Any info much appreciated.
    I know a lot of guys use those cards so someone must know something.

  14. #64
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Arecas will work in a 4x slot, assuming you're talking about a 4x-wired slot (i.e., physically 16x size but only wired for 4x). I have a 1210 here in a 4x slot (physical 16x) with no problem, and I've used the 1280ML in a 4x in a pinch when I was copying data around (also physical 16x).

    Don't know about working in a PEG slot at all.

    As for the BBU, the BBU is completely different from a UPS for the system. If you're going to enable write-back cache I would say yes, get a BBU, regardless of whatever other types of power conditioning/backup you have. The BBU is for outstanding transactions, which covers more than just a power loss; it can be a Windows crash or a host of other items.

    You can manually force on write-back cache without a BBU; however, this is not something anyone I know would ever suggest. As I mentioned elsewhere, it's like jumping out of an airplane without a parachute. You'll hit the ground faster, but it's not a safe thing to do for your health.


  15. #65
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Beginning from the end, as you said (lol), the 4x slot Janus is talking about is actually a 4x physical AND electrical slot, a bonus Gigabyte has been putting in some of their recent motherboards (mostly 945G and G3x-based).

    The great thing is, though that 4x approach was chosen to leave trace space for the rest of the motherboard (we're talking mATX here...), the slot is open-ended, so you can also use longer cards (be it a second PEG card for Crossfire or a longer controller).

    Oh, and Janus, the PCI-E standard says cards shouldn't even need all of their own electrical lanes to work. If that controller card doesn't work in the DS2/DS2R, it might be a poor PCI-E implementation on Gigabyte's end...

    Then, I'll have to go about that triple-layer RAID array you're considering. That's just insane (ly complicated, expensive and whatnot). But I'm guessing that, if properly configured, you can get a LOT of power, security AND performance from that...

    Finally, I'll have to do the maths on the performance. Honestly, full Gbps speed will NOT be a goal. Waaaay overkill for my needs. Probably max 3x10MBps read tops, and never over 10 simultaneous connections.

    Ok, I must have been half asleep when I wrote the whole 64K SMB frames = stripe size thing... I completely forgot about a bunch of stuff... I'll have to do the maths there too, but please tell me: how is stripe size (i.e., the minimum amount of space any given piece of information will use up on each disk of the array) related to cluster size (i.e., the minimum amount of space any given piece of information will use up on the volume)?

    Should I select my stripe size as something like "cluster size/(n-1)", where "n" is the number of disks in the array? If I'm thinking straight, this would yield both the maximum performance AND the minimum disk waste, right?

    Now, where did I leave my calculator? hehe

    Cheers.

    Miguel


    P.S.: Again, XS new post alert failed on me... I must be doing something wrong...

  16. #66
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Quote Originally Posted by __Miguel_ View Post

    Then, I'll have to go about that triple-layer RAID array you're considering. That's just insane (ly complicated, expensive and whatnot). But I'm guessing that, if properly configured, you can get a LOT of power, security AND performance from that...
    I don't really understand what you mean. Can you explain?

  17. #67
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    To put it simply, stripe size and cluster size don't have a relationship. A cluster (or any filesystem structure) is nothing more than a data block (I/O request) to a disk subsystem; it doesn't know whether the request comes from an application or the filesystem, both look the same (well, barring ZFS possibly, as it's also doing the RAID). You have no 'waste' from a stripe-size point of view; that's purely a filesystem-layer matter, not a disk-subsystem-layer one.

    A couple of things about stripe size. A stripe, as you mentioned, is only the largest contiguous chunk of data per disk. It has no real meaning with RAID-1 arrays, only striped arrays (0/3/4/5/6). When reading you are NOT required to read a full stripe at all (any portion, even only 1 block). However (with parity RAIDs) you must write an entire stripe and recalculate the parity. So stripe sizes generally have more impact on writes than on reads.

    Basically the only thing a drive (or controller) sees from your computer is 'read/write X blocks starting at location Y'. What you want at any time is to have as many spindles active as possible. So if your 80% rule workload (i.e., blocks requested) is, say, 64 sequential blocks, then you want a stripe size of 32k (each block/sector is 512 bytes). This is also where the user mythos that RAID 0/3/4/5/6 does not perform well over, say, a single drive comes in. The above example is good if you have a request size of 64 blocks, but if you DON'T have enough requests outstanding you will NOT have the queue saturation to hit multiple spindles. (I.e., you may want say 64 blocks, which may take a 7200rpm drive about 13ms to transfer to the OS. If you don't have enough requests happening, then either 1) it's slow enough that a single drive can service the request and complete it before the next one comes in, or 2) you have a request size smaller than your stripe width, so you are not hitting all your spindles.) In these cases you won't really see a big improvement. There is nothing wrong with the RAID; it's just not set to match your workload.

    Another item to be careful of: this may imply that more (heavier) workloads are better. Only to a point. Generally, keeping your RAID sized so it's below, say, 60% utilization on a single disk is where you want to be. Beyond that, drives take longer to service a request, which then drags down overall performance.

    Generally your service time is:
    disk service time = seek time + rotational delay + (block size / throughput)
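    In Python, with assumed figures for a typical 7200rpm SATA drive of the era (8.5ms seek, ~60MiB/s sustained transfer), that formula lands right around the ~13ms quoted above for a 64-block request:

    def disk_service_time_ms(blocks, seek_ms=8.5, rpm=7200, throughput_mib_s=60.0):
        """seek time + rotational delay + (block size / throughput); inputs are assumptions."""
        rotational_delay_ms = 0.5 * 60000.0 / rpm                          # average half revolution
        transfer_ms = blocks * 512 / (throughput_mib_s * 1024 * 1024) * 1000
        return seek_ms + rotational_delay_ms + transfer_ms

    print(disk_service_time_ms(64))    # ~13.2 ms for a 64-block (32KiB) request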

    But I think I'm wandering. Anyway, you want to first figure out your workload: what type (random/sequential) and the average request size of your system. Diskmon for Windows is good for that. Run it and put the output in the spreadsheet I posted in the RAID sticky thread. It's set up so you can run several captures and compare them (i.e., different applications or whatever).


  18. #68
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by XS Janus View Post
    I don't really understand what you mean. Can you explain?
    Sorry, Janus, that part was not for you. It was directed at stevecs, but it came out VERY wrong because I added your reply after I had written that part... Sorry.

    Btw, unless I got that very wrong, stevecs was really talking about a triple-nested RAID level, which is NOT very commonly seen (I've actually never heard of it...).

    @stevecs: it's too late for me to understand more than 90% of what you wrote, so I'll get back to you in a few hours, ok? I'm completely spent, and I have to get to work in 5 hours...

    Cheers.

    Miguel

  19. #69
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    @Miguel - I know what you mean about a long day. As for the multi-level RAID reference, 1+0+0 or 1+0+1 et al.: you are correct that it's not too common, especially in smaller environments (or more accurately, with the smaller stripe widths that are normally deployed). I normally see this when you try to create very wide stripe widths. RAID HBAs usually have some limit to their width; for Areca it's 32 drives wide. So if you have more than 32 drives and want to create a single volume, you have to go to multi-level RAIDs. For a lot of them this is a simple RAID x+0. With RAID-10 (1+0) this becomes a RAID 1+0+0. But as you surmised, it can be anything (it doesn't need to be a 0; it can be a 1 or actually anything, though 1 is more common), which greatly increases availability. Unfortunately you still have the problem of the underlying BER rates of the drives and cables, which poses a different challenge. (There's never a free ride.)


  20. #70
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    I'm still not fully awake, but I think I got it now. Or at least I'm closer to getting it right... lol

    First of all, it's good to know striped arrays aren't actually dependent on the cluster size. I'm guessing that means when I write a cluster to an array and that cluster is smaller than the stripe, the next one will sit in the same stripe, until it's full. Cool, less waste of space.

    So, after finding out what my block request is for reads AND writes (though reads will influence my array the most, so I'll most likely sway that way), I'll be able to start defining my stripe size.

    Here's where I'm a little stumped... How do I do those maths? For RAID-0, that's easy ("n" represents the number of disks in the array): "block size/n"; for RAID-1, as you said, that's not really an issue; RAID-5 (and the other parity RAID levels) is what's bugging me... I'm thinking something along the lines of "block size/(n-y)", "y" being the number of parity drives, since parity drives can't really be used for reading (if they are, you'll have a problem... lol), and writing to them is only done AFTER the stripe is written. Am I thinking straight, or is it still too early in the morning?

    Oh, one last question: does the fact that most (read: all) of the requests to/from the array are made through the SMB protocol influence the block size requests? Or should I stop asking these kinds of questions and just use the frigging spreadsheet?

    Cheers.

    Miguel

  21. #71
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Actually, with parity RAIDs (3/4/5/6) writes will have the biggest effect on performance, as each write is actually 4:1 or 6:1 in IOPS, whereas a read is 1:1 (i.e., reads are the most efficient, and striped arrays are close to the same read performance as a RAID-0: RAID 3/4/5 = RAID-0 of n-1 drives, RAID-6 = RAID-0 of n-2 drives). That is unless you are running in degraded mode, but that's an aberration, as you should ONLY be in that state until you repair your array.
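    As a sketch of what those ratios do to a mixed workload (the drive IOPS and drive count below are assumed, not measured):

    # Effective host IOPS once the write penalty is applied: reads cost 1 disk IO,
    # RAID-5 writes cost 4, RAID-6 writes cost 6 (the ratios above).
    def effective_iops(raw_array_iops, read_fraction, write_penalty):
        write_fraction = 1.0 - read_fraction
        return raw_array_iops / (read_fraction + write_fraction * write_penalty)

    raw = 4 * 75                                                      # e.g. 4 drives at an assumed 75 IOPS each
    print(effective_iops(raw, read_fraction=0.95, write_penalty=4))   # ~261 IOPS when it's mostly reads
    print(effective_iops(raw, read_fraction=0.50, write_penalty=4))   # only 120 IOPS at 50/50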

    It's mainly the request block size, where a request is the number of sequential blocks (sectors) of your workload. This also assumes that your pattern is mainly read-based; if you have a good number of writes then non-parity RAIDs are what you want to use. For getting your request sizes and the distribution of reads/writes, plus the ratio of reads/writes, just get diskmon and grab the spreadsheet, which has the calcs in it. http://www.xtremesystems.org/forums/...=150176&page=2

    Your stripe size should be your average (or err on the larger side if you're on an edge; i.e., if you have say 80 blocks (40K), use 64K, et al). As I mentioned before, striping gets its benefit when you have numerous requests. If you don't have enough requests to fill the queues then you won't see much of an improvement over a single drive, as the issue is not the RAID but the delay between requests (so in this case a faster RPM may help a little, as it will cut down latency).
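    A tiny helper that applies that round-up rule; the list of 'standard' stripe sizes is just an assumption about what a given controller offers:

    SECTOR_BYTES = 512                            # one block/sector, as used above
    STRIPE_CHOICES_KIB = [16, 32, 64, 128, 256]   # assumed controller options

    def stripe_for_avg_request(avg_blocks):
        """Match the stripe to the average request, erring on the larger side:
        64 blocks -> 32KiB, 80 blocks (40KiB) -> 64KiB, as in the examples above."""
        request_kib = avg_blocks * SECTOR_BYTES / 1024
        for size in STRIPE_CHOICES_KIB:
            if size >= request_kib:
                return size
        return STRIPE_CHOICES_KIB[-1]

    print(stripe_for_avg_request(64))   # 32
    print(stripe_for_avg_request(80))   # 64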

    As for SMB protocols et al, that really has nothing to do with the lower-level functions. A disk/RAID subsystem sees just block requests, period. A level higher than that is the filesystem, which translates those blocks into file handles; above that is the OS, and then you have the application (SMB would be closer to the OS level than the application level). But in any case it's removed from the subsystem by a step or two.

    If you are trying to model an overall system you need to take that into account (but then you'll also take in filesystem efficiency, OS efficiency, network protocol stack implementation, SMB protocol implementation, application efficiency, et al). From what you're saying about ~100Mbit performance, you will be hard-pressed to have an issue at all unless you're using ~15-year-old drives. Your bottleneck would be the network, not the local drive subsystem.


  22. #72
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    I'm learning more and more each time I read one of your posts, stevecs.

    Thanks, man, you're being a great help.

    I'm already testing your spreadsheet on a non-related server, to test drive it, with interesting results.

    The first one is that, maybe because I'm using the Portuguese locale, there are weird number translations... My "seconds" column climbs exponentially starting with the first second, and the numbers become xxx.xxx.xxx when copying from Notepad to Excel, which translates into an insane total-seconds number... Any ideas on that?

    Then, from a manually-corrected spreadsheet and a 2-minute test, I can see I was making ~100K requests (reads), and almost 95% reads... hehehe Not bad, but that's not a usual load for that server... I'll have to refine it.

    Also, I have two questions about Diskmon:
    1) How can I get the data transfer type? I only get blanks on mine...
    2) Is there any easy way to filter the results for a single disk/array?

    Oh, and after re-reading post #67, I finally understood the block vs. stripe issue. I was tying my head in a knot about that; I somehow interpreted that "64" as "64k", and then 32k for the stripe size... hehehe Hence that weird question about stripe size vs. cluster size. Yes, I need some more sleep; I usually don't make that kind of mistake (well, even if I do, it's not something that can land you in jail... lol).

    I think I'm getting the hang of it now. Nice. What a cool way to start the weekend (heh, kind of; I've got a lot of studying to do, so...).

    Again, thank you so much for all the help, stevecs. You've been great.

    Cheers.

    Miguel


    [OT]: Stevecs, what do you use ~22TB of storage for? That's an insane amount of HDD space... Do you run an on-line storage company at home? lol [/OT]

  23. #73
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    I've never heard of a locale causing any problems like you described. When you run diskmon it should start up in capture mode automatically. It will look like this:

    0 0.016125 0.00045776 1 Write 131449491 540
    1 0.016125 0.00729561 0 Read 43070863 8
    2 0.016125 0.00729561 0 Write 43070887 8
    3 0.016125 0.00729561 0 Read 43070855 8
    4 0.016125 0.00729561 0 Write 43070879 8
    5 0.016125 0.00729561 0 Read 43070847 8
    6 0.016125 0.00729561 0 Write 43070871 8
    7 0.016125 0.00729561 0 Read 43070575 24
    8 0.016125 0.00729561 0 Write 43070639 24
    9 0.031750 0.00045776 1 Write 131450031 540
    10 0.031750 0.00729561 0 Read 43070567 8

    I then use Save As to a filename (make sure it's less than 65535 lines, as that's the Excel limit). I then open up the text file with Excel into a new spreadsheet (space delimited, treat multiples as one), and then just copy those cells over to the calculation sheet. (You may need to extend the calculation in the 'type' field; I didn't carry it down to all 65535 cells as it would be a huge spreadsheet that I couldn't post here.)

    As for data transfer type, that is calculated by the spreadsheet. Basically it just looks at the transfer sectors and does the subtraction: if the blocks are next to each other (adjacent sectors) it's a sequential request, and if they are not then it's random. Pretty simple.
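    Here's a rough stand-alone version of that check in Python, using one interpretation of 'adjacent sectors'; the field order (index, time, duration, disk, op, sector, length) is taken from the capture sample above:

    def classify_requests(lines):
        """Tag each diskmon record RAN or SEQ: sequential if it is contiguous with the
        previous request on the same disk (one reading of 'adjacent sectors')."""
        last = {}                              # disk number -> (sector, length) of previous request
        tagged = []
        for line in lines:
            _, _, _, disk, op, sector, length = line.split()
            disk, sector, length = int(disk), int(sector), int(length)
            prev = last.get(disk)
            contiguous = prev is not None and (sector == prev[0] + prev[1] or sector + length == prev[0])
            tagged.append((disk, op, sector, length, "SEQ" if contiguous else "RAN"))
            last[disk] = (sector, length)
        return tagged

    sample = [
        "1 0.016125 0.00729561 0 Read 43070863 8",
        "3 0.016125 0.00729561 0 Read 43070855 8",
        "10 0.031750 0.00729561 0 Read 43070567 8",
    ]
    for row in classify_requests(sample):
        print(row)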

    As for filtering, yes: when you first pull the data from diskmon into Excel, just sort it on the disk # field and copy only those rows over. If you mean in the diskmon program itself, no, it's a very simple program.

    No problem, it can be a lot to take in, and the 'modern' push to make everything look like a 'black box' to the consumer doesn't help matters. You _DO_ know there's a quiz later, right? :P

    As for the smallish array here (and I say that as I'm always running out of space), it's mainly for all my media content. I do a lot of photography and some video work, plus various testing/coding projects. With about 200MiB per slide (and 20-30,000 of those for 35mm; up to a GiB each for medium format), plus all the DV video, plus first-stage backup of my laptop/desktop box before it gets moved to tape, plus software storage (I can never find a CD when I want it, so I just put everything on there), it fills up fast.


  24. #74
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, it really was a locale setting messing up the thing... Here in Portugal we have different rules for digit grouping (we actually use "." as the digit grouping symbol and "," as the decimal separator). You probably never had an issue because in the States it's the other way around...

    My settings were conflicting with the data Diskmon supplies (which uses "." as the decimal separator, completely misread by Excel... unless this is an Excel bug I just discovered, since sub-second entries were imported OK... hehehe).
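    For anyone else hitting this, a small Python sketch of a workaround; it assumes the field layout from the sample above, with the time and duration in the 2nd and 3rd columns, and the file names are just placeholders:

    # Diskmon writes times with "." as the decimal separator; a pt-PT Excel reads that
    # as digit grouping.  Rewriting those two columns before importing avoids the
    # mangled "seconds" values.
    def fix_decimal_separators(in_path, out_path):
        with open(in_path) as src, open(out_path, "w") as dst:
            for line in src:
                fields = line.split()
                if len(fields) >= 3:
                    fields[1] = fields[1].replace(".", ",")   # elapsed time
                    fields[2] = fields[2].replace(".", ",")   # request duration
                dst.write(" ".join(fields) + "\n")

    # fix_decimal_separators("diskmon.txt", "diskmon_ptPT.txt")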

    I also figured out the RAN/SEQ formula. Copying from Notepad was deleting that column's data, so I had to create a temporary column for the data copy. It's fine now.

    I also remembered how to filter data in Excel... It had been a while; you don't use that often in the legal profession... lol

    As for the quiz, I'm actually looking forward to it... hehehe I don't like "black boxes", only if I made them, and even then, only if I can tinker with them... hehehe

    Finally, we'll just say we have different meanings for the adjective "small"... I could fit ALL my data (from all my HDDs, CDs, DVDs, FDDs, AND pen drives) and still come (waaaay) short of 10TB... I think I'd drool if I just saw your Server system, let alone your IT network... (That would be like instant-orgasm, and I know someone who wouldn't like that not even a tiny bit... lol)

    Now, back to study. I have to ace your quiz...

    Cheers.

    Miguel

  25. #75
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Heh, I must be a lucky guy... Honestly.

    I mean, "upper management" (read: parents) approved a €300+ budget for HDDs for that NAS just eight hours ago...

    And it also approved a €100+ budget for "other NAS-related expenses" (read: mobo, CPU, RAM, PSU, in short, everything else... lol).

    My jaw almost dropped to the floor at that moment. I'll actually have to start drawing the thing (case builder here) and planning everything out.

    So, what do you say to 3x 501LJ, or 3x WD 500GB GP drives? The Samsung ones are usually darn cheap (the 321KJ is now sub-€70, and the 501LJ is actually sub-€90; 750GB drives are usually in the €125~€150 range), but I will sleep in the same room as the NAS, and you can't beat the Green Power's noise... unless I can program the NAS to suspend from, say, 12pm to 7am...

    Cheers.

    Miguel
