Quote Originally Posted by stevecs View Post
I am of the opinion that a pagefile is useless if you have the money for RAM
Good point.

Quote Originally Posted by stevecs View Post
As for striped volumes and booting, that's solved by using a raid-1 or 10 for your OS raidset.
So, an array that's mirrored across multiple controllers is bootable? Does it just boot from half of the array?

Quote Originally Posted by stevecs View Post
Two SSDs (one per controller) merged into a mirror at the OS level (dynamic disc) for your OS. Split the remaining 6 drives into two RAID-0s (one per controller) and mirror at the OS level (RAID 0+1).
If I need to give up half of the SSDs for data reliability, then I will probably need to move to the X25-M instead of the -E in order to have enough capacity. Of course, that costs me in both write performance and device life / MTBF. At least the hardware is a bit cheaper (80GB version).
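To put rough numbers on that, here's the usable capacity under the layout you described (2 SSDs mirrored for the OS, 6 in RAID 0+1 for data). The per-drive sizes are my assumptions (64GB for the -E, 80GB for the -M):

Code:
# Usable capacity of the proposed SSD layout:
# 2 drives mirrored for the OS + 6 drives in RAID 0+1 for data.
# Drive sizes are assumptions: 64GB X25-E vs 80GB X25-M.

def usable_gb(drive_gb):
    os_mirror = drive_gb        # 2-drive mirror keeps the capacity of 1 drive
    data_raid01 = 3 * drive_gb  # 6 drives striped then mirrored keeps 3 drives' worth
    return os_mirror + data_raid01

print("X25-E (64GB):", usable_gb(64), "GB usable")  # 256 GB
print("X25-M (80GB):", usable_gb(80), "GB usable")  # 320 GB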

This also raises the question of whether I should wait a month (maybe more) for the upcoming 6Gb/s SSDs, such as the C300.

Quote Originally Posted by stevecs View Post
The 13 drives (would really try and get SAS drives here, as SAS is bi-directional unlike SATA and has much greater command queue depth/reordering) split into two RAID-5s of 6 drives each, one per controller. Set up a RAID-0 in the OS across the two LUNs (RAID 5+0). The last drive would be a cold spare in case of failure of one of the 12. This would give you ~9TiB of space and about 790 read IOPS / 152 write IOPS (assuming 78/73 for a single 7200rpm SATA drive).
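Before I get to the drive question: here's my attempt to reproduce that estimate with the usual rules of thumb, just to make sure I follow the math. The 1TB drive size and the RAID-5 write penalty of 4 are my assumptions; my write figure comes out higher than yours (~220 vs 152), probably from a different penalty factor, but it's the same ballpark:

Code:
# Back-of-envelope for the 12-drive RAID 5+0 (two 6-drive RAID-5 groups, striped).
# Assumptions (mine): 1TB drives, 78 read / 73 write IOPS per drive,
# and the classic RAID-5 small-write penalty of 4 I/Os per host write.

DRIVES_PER_GROUP = 6
GROUPS = 2
DRIVE_TB = 1.0
READ_IOPS_PER_DRIVE = 78
WRITE_IOPS_PER_DRIVE = 73
RAID5_WRITE_PENALTY = 4

data_spindles = GROUPS * (DRIVES_PER_GROUP - 1)        # one drive per group goes to parity
usable_tib = data_spindles * DRIVE_TB * 1e12 / 2**40   # decimal TB -> binary TiB

read_iops = data_spindles * READ_IOPS_PER_DRIVE
write_iops = GROUPS * DRIVES_PER_GROUP * WRITE_IOPS_PER_DRIVE / RAID5_WRITE_PENALTY

print(f"usable space: {usable_tib:.1f} TiB")       # ~9.1 TiB
print(f"random reads:  ~{read_iops} IOPS")         # ~780
print(f"random writes: ~{write_iops:.0f} IOPS")    # ~220
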
The only affordable SAS option I'm aware of is the Seagate ES.2. They carry about a 50% price premium per GB compared to the WD RE3. To stay in my budget, I would therefore have to drop from 13 drives to 9 or so. Is the improvement from bi-directionality and deeper command queuing really worth that much?

Do you know if controller latency is any better with the SAS drives? The SAS controllers usually emulate SATA in software, which can add to latency (hence the advantage of the 1231ML, since it's a native SATA controller). Although, I vaguely recall that the ES.2 does a SAS to SATA conversion on the drive side, so maybe it's a wash?

I would love to use 10^16 BER / 1.6M hour MTBF SAS drives (the 2.5-inch versions would be ideal), but the 6 to 8x higher cost per GB makes them prohibitive for this application.

Quote Originally Posted by stevecs View Post
Now, if load balancing weren't a main issue (i.e. not high loads) and you wanted to offload any dynamic disc functions from Windows, I would probably put the RAID-5s (still do a RAID-50 of 12 drives) on one controller along with the two SSDs in RAID-1 for your OS, and the other controller would have the 6 SSDs in RAID-10. Bump the cache on both cards to 2GiB to aid in writing the dirty data out.
Would you still use RAID-50 with 10^15 BER drives like the RE3? Or would RAID-60 be better?
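For context, here's the back-of-envelope that makes me ask. It estimates the chance of hitting an unrecoverable read error while rebuilding one 6-drive RAID-5 group after a single drive failure; the 1TB drive size and full-surface rebuild are my assumptions:

Code:
# Chance of an unrecoverable read error (URE) during a rebuild of one
# 6-drive RAID-5 group after a single drive failure.
# Assumptions: 1TB drives, 10^15 BER (RE3-class), full-surface rebuild,
# independent errors.
import math

BER = 1e-15                    # unrecoverable errors per bit read
DRIVE_BYTES = 1e12             # 1 TB drive
DRIVES_PER_GROUP = 6

bits_read = (DRIVES_PER_GROUP - 1) * DRIVE_BYTES * 8       # every surviving drive read in full
p_ure = -math.expm1(bits_read * math.log1p(-BER))          # 1 - (1 - BER)^bits_read, computed stably

print(f"bits read during rebuild: {bits_read:.1e}")         # ~4e13
print(f"chance of at least one URE: {p_ure:.1%}")           # roughly 4% per group rebuild

With RAID-6, a single URE during a one-drive rebuild is still recoverable from the second parity, which I gather is the usual argument for RAID-60 with 10^15 BER drives; the cost is another drive per group and a heavier write penalty.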

OK, so now we can get back to the question in the OP: which controllers would be best here?

Quote Originally Posted by stevecs View Post
Then from those LUNs above, create partitions for each of your file systems (i.e. create a file system for each function above: db, trans log, app, games, et al.). This allows you to have a different cluster table for each, so you don't create a bottleneck there, as it's single-threaded per file system.
Sounds right.

Quote Originally Posted by Levish View Post
It would be a matter of setting up your backups, which you have to do anyway when implementing a server. And since it's a server, it's assumed the device only gets turned off or restarted for planned maintenance.
It's not a server. This will be a desktop machine, running Win 7 Ultimate.

Quote Originally Posted by Levish View Post
- VSS enabled on the RAM disk, checking hourly, with the storage used for VSS set to normal disks or SSDs (this wouldn't work well for DBs).
VSS doesn't work in Win 7, does it?

Quote Originally Posted by Levish View Post
ntbackup full daily, differential hourly of the RAM disk; the differentials would overwrite, so you'd only have the most recent diff plus the full. Then back up again to your tape/CD/DVD or whatever for archival (this would work for DBs; just set up backups for the DB and transaction logs).
Certainly an option; it's just messy -- plus, of course, the system performance drops through the floor while the backups are in progress.

The other issue is cost, which on a per-GB basis is considerably higher than an SLC SSD.

Even so, I am (slightly) tempted by the IoDrive. It's not RAM, so it doesn't have the volatility issues of the Acard. But the cost is still crazy: $3K for just 80GB, and you can't use it as a boot device. The plus side is high random 4K IOPS and low latency -- although the benches I've seen don't seem to be nearly as good as the specs.