My workload is a mix. Sometimes my queue depths are low -- which is why minimizing latency is near the top of my list. But at other times they might get up to 20 or so, depending on what I'm doing. I would expect queue depths to drop with a much faster disk subsystem.
Based on some monitoring I've done, I believe that one of the main areas I need to optimize is OS-oriented functions: things like paging, application start-up, DLL loading, processing temporary files, I/O for services, anti-virus, etc. Another focus is everything related to Visual Studio, which also involves loading lots of DLLs/plug-ins, running the compiler, handling temp files, etc. SQL Server and IIS are third priority -- although SQL Server is easy in some ways because it's completely configurable (it will have a dedicated log drive, for example).
Since RAM devices lose everything if they lose power, using one would mean frequently copying everything back and forth to more durable storage, which isn't workable in my environment.
The Fusion-io ioDrive is a possibility, although at about $3300 for 80GB its cost per GB is roughly 3x that of the X25-E, if you include the controllers (and the -E is already outrageous). I'm hoping there's a better way.
The issue there is that, AFAIK, software-striped volumes aren't bootable -- so that would help with my application stuff, but not with the OS. With only 8 drives and a requirement for redundancy, does splitting them across three controllers (one for OS and two for apps) really make sense?
Which is why they're prioritized. Small-block performance is more important for the SSD subsystem. If that means completely giving up large-block perf, that's fine.
The goals don't apply to the large storage subsystem.
The RAID-6 array will primarily be used for secondary storage and archive, where performance isn't as critical. Actually, large block perf is probably more important there than small block, and latency can be higher. Reliability is really the main metric there (I may go with RAID-60; that part is still TBD).
I currently have a rough budget for the SSD side of things of approx. $400 each for eight drives, plus $750 each for two controllers, so $5K or so. On the magnetic side, approx. 13 1TB drives at $150 each, plus $1K for the controller, so about $3K -- or a little more if you include chassis components (this all needs to fit in a single tower, so it's a bit of a squeeze). Tentative config for the magnetic drives: 1 hot spare, 2 in RAID-1 for the DB log, and the remainder as either an 8+2 RAID-6 or possibly a RAID-60 built from two 3+2 RAID-6 sets (with two or maybe three partitions, including one short-stroked for best perf).
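Just to sanity-check the arithmetic, here's the tally (a quick Python sketch; all figures are the ballpark prices above, nothing firm):

    # Back-of-the-envelope budget tally (ballpark prices only)
    ssd_total = 8 * 400 + 2 * 750   # eight SSDs + two controllers = 4700 -> "$5K or so"
    hdd_total = 13 * 150 + 1000     # thirteen 1TB drives + one controller = 2950 -> "about $3K"
    print(ssd_total, hdd_total)     # 4700 2950 (chassis components not included)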
The X25-E is spec'd at 35,000 IOPS for random 4K reads. I'm hoping that with a RAID-5 array I might hit close to 200,000 IOPS. SQL Server actually reads and writes 8K pages and 64KB extents, but I would still be inclined to push the optimization toward the smaller side.
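Rough math behind that hope (a sketch; the ~70% scaling factor is just my guess at controller/RAID overhead, not anything from a spec sheet):

    drives = 8
    iops_per_drive = 35000                  # Intel's 4K random-read spec for the X25-E
    theoretical = drives * iops_per_drive   # 280,000 IOPS if scaling were perfect
    hoped_for = int(theoretical * 0.7)      # ~196,000 -- "close to 200,000" after overhead
    print(theoretical, hoped_for)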
IIRC, the X25-E is internally striped in 4K blocks -- which means a 4K strip size would be a reasonable minimum. In a 7+1 RAID-5 config, that gives a 28K full stripe. Possible alternatives: a 4+1 RAID-5 (16K stripe) plus a 2+1 RAID-5 (8K stripe), or two 2+1 RAID-5 arrays plus a RAID-1 mirror.
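The stripe widths above are just data drives times strip size, e.g.:

    strip_kb = 4                      # matches the X25-E's internal 4K striping (IIRC)
    for data_drives in (7, 4, 2):     # the 7+1, 4+1, and 2+1 RAID-5 layouts
        print(data_drives, "+1 RAID-5: full stripe =", data_drives * strip_kb, "K")
    # 7+1 -> 28K, 4+1 -> 16K, 2+1 -> 8K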



