OK, what you're describing really needs multiple RAID sets. Don't create a 'catch-all' from all the drives, as you don't have enough of them to handle the varying types of loads. From what you're describing I can see the following RAID sets: OS, application, scratch, database, DB transaction log, games, and long-term storage. Your QD will be large on only a couple of them (probably the scratch and database ones, but small on the others). I'm of the opinion that a pagefile is useless if you have the money for RAM; in over 30 years I have yet to find an application that -needed- a pagefile/swapfile (not saying they don't exist, I just haven't run across one). Dropping it dramatically cuts down on your finite IOPS and bus bandwidth. Keep it in memory.
As for striped volumes and booting, that's solved by using a RAID-1 or 10 for your OS RAID set. You use other RAID sets for the other functions, which can be striped or not. Splitting across multiple controllers does make sense IF you are running into a controller, bus, drive, etc. bottleneck. Remember that the Arecas (and most other cards of similar architecture, Adaptec et al.) have a 256-command limit per card. Even if you /CAN/ get 32 commands per SATA drive (unlike the 128-256 of a SAS drive), that's only 8 drives before the card's queue is full, and once you reach that you don't give the card any time to re-order and forward commands, so your service time to the array will increase. Between this and the IOPS limits, it's generally better to keep subsystems below their spiral point. For SATA my rule of thumb is below 40% utilization.
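If you want to see how quickly that command budget runs out, here's a quick sketch (assuming the 256-command card limit and 32-deep NCQ per SATA drive mentioned above; these are just parameters, not measurements of any particular card):

```python
# Rough command-budget sketch for a RAID card, per the figures above.
# Assumptions: 256 outstanding commands per card, 32 NCQ slots per SATA drive.

CARD_QUEUE = 256        # commands the card can hold/reorder at once
SATA_NCQ = 32           # per-drive NCQ depth (SAS is typically 128-256)

def drives_to_saturate(card_queue=CARD_QUEUE, per_drive=SATA_NCQ):
    """How many drives, each pushing a full queue, fill the card's command limit."""
    return card_queue // per_drive

def queue_share(n_drives, card_queue=CARD_QUEUE):
    """Commands per drive the card can actually hold if n_drives all stay busy."""
    return card_queue / n_drives

print(drives_to_saturate())     # 8  -> eight busy SATA drives already fill the card
print(queue_share(12))          # ~21 -> with 12 drives the card can't even hold a
                                #        full 32-deep queue per drive to re-order
```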
The X25-E is rated at around 35,000 read IOPS and 3,300 write IOPS at 4 KiB, so an 80/20 read/write mix works out to roughly ~27,000 IOPS, or ~108 MiB/s, per drive. With a RAID-5 array you will be lucky to get 3,300-4,000 IOPS, because random writes on parity RAIDs are limited to roughly a single drive's write rate. You need a multi-level RAID (RAID-50 or 10, et al.) to increase that. The cache on your controller will mask this (as it's buffered), but you will never have enough cache for the entire subsystem (if you did, why have the subsystem?). I would leave your parity RAIDs for your long-term storage, games, and application partitions. Scratch would probably be a stripe (RAID-0), assuming it really IS scratch and you want it for speed. OS would be RAID-1 or 10, as would your DB/transaction-log partitions.
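Here's the back-of-the-envelope version of that per-drive estimate (a sketch using a simple weighted average of the rated 4 KiB figures; it lands in the same ballpark as the ~27k above, and the parity-RAID line is just the "writes collapse to one drive" rule):

```python
# Ballpark for an X25-E under an 80/20 4 KiB read/write mix, using the rated
# figures above. Real mixed-workload numbers will differ somewhat.

READ_IOPS, WRITE_IOPS = 35_000, 3_300   # rated 4 KiB random read / write IOPS
READ_FRAC = 0.80
IO_KIB = 4

mixed_iops = READ_FRAC * READ_IOPS + (1 - READ_FRAC) * WRITE_IOPS
print(round(mixed_iops))                  # ~28,660 -> same neighborhood as ~27k above
print(round(mixed_iops * IO_KIB / 1024))  # ~112 MiB/s per drive

# Parity-RAID random writes collapse to roughly one drive's write rate:
raid5_random_write = WRITE_IOPS           # ~3,300 IOPS regardless of member count
print(raid5_random_write)
```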
Since you are looking at a small budget and the constraints of a single tower, you have to make concessions. If you only have 13 1 TB drives and 8 SSDs total, I would probably do the following:
(Most equal load balance, assuming you want to push really high QDs.) Two SSDs (one per controller) merged into a mirror at the OS level (dynamic disk) for your OS. Split the remaining 6 SSDs, three per controller, into two RAID-0s and mirror them at the OS level (RAID 0+1). The 13 HDDs (I would really try to get SAS drives here, as SAS is bi-directional unlike SATA and has much greater command queue depth/reordering) split into two RAID-5s of 6 drives each, one per controller, then set up a RAID-0 in the OS across the two LUNs (RAID 5+0). The last drive would be a cold spare in case one of the 12 fails. This gives you ~9 TiB of space and about 790 read IOPS / 152 write IOPS (assuming 78/73 for a single 7,200 rpm SATA drive).
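If you want to sanity-check those figures, this is the arithmetic behind them (a sketch under the same assumptions: two 6-drive RAID-5 groups striped together, 1 TB members, 78 read / 73 write IOPS per drive, and the parity-write rule above; it reproduces the numbers roughly, not exactly):

```python
# Back-of-the-envelope for the 12-drive RAID-50 (two 6-drive RAID-5 groups striped),
# using the per-drive assumptions above: 1 TB members, 78 read / 73 write IOPS each.

GROUPS = 2
DRIVES_PER_GROUP = 6
DRIVE_TB = 1.0
READ_IOPS, WRITE_IOPS = 78, 73

# Capacity: each RAID-5 group loses one drive's worth to parity.
usable_tb = GROUPS * (DRIVES_PER_GROUP - 1) * DRIVE_TB
usable_tib = usable_tb * 1e12 / 2**40
print(round(usable_tib, 1))       # ~9.1 TiB

# Random reads scale with the data spindles; random writes on a parity group
# collapse to roughly one drive, so striping two groups gives ~2x one drive.
read_iops = GROUPS * (DRIVES_PER_GROUP - 1) * READ_IOPS
write_iops = GROUPS * WRITE_IOPS
print(read_iops, write_iops)      # ~780 read / ~146 write, close to the figures above
```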
Now, if load balancing weren't a main issue (i.e., not high loads) and you wanted to offload the dynamic-disk work from Windows, I would probably put the RAID-5s (still as a RAID-50 of 12 drives) on one controller along with the two SSDs in RAID-1 for your OS, and put the remaining 6 SSDs in RAID-10 on the other controller. Bump the cache on both cards to 2 GiB to help flush the dirty data out.
Then, from the LUNs above, create partitions for each of your file systems (i.e., a separate file system for each function above: DB, transaction log, app, games, et al.). This gives each one its own cluster table, so you don't create a bottleneck there, since that's single-threaded per file system.