Thread: Best controller for a system with eight SSDs?

    #8 | Xtreme Member
    Join Date: Aug 2009
    Location: Nelson, New Zealand
    Posts: 367
    Quote Originally Posted by stevecs
    If your queue depths are going to be small then there's not much you can really do as you're going to be limited by the speed of a single SSD or single device.
    My workload is a mix. Sometimes my queue depths are low -- which is why minimizing latency is near the top of my list. At other times they might get up to 20 or so, depending on what I'm doing. I would expect queue depths to drop with a much faster disk subsystem.
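
    (Aside: that expectation is just Little's law -- outstanding I/Os = IOPS x latency. A minimal sketch, using a made-up 5,000 IOPS workload and assumed service times:)

    Code:
    # Rough Little's-law sketch (average queue depth ~= IOPS x service time).
    # The 5,000 IOPS workload and both latencies are illustrative assumptions, not measurements.
    def avg_queue_depth(iops, latency_ms):
        return iops * (latency_ms / 1000.0)

    print(avg_queue_depth(5000, 4.0))   # ~20 outstanding I/Os at a ~4 ms spinning-disk service time
    print(avg_queue_depth(5000, 0.2))   # ~1 outstanding I/O at a ~0.2 ms SSD service time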

    Based on some monitoring I've done, I believe one of the main areas I need to optimize is OS-oriented activity: paging, application start-up, DLL loading, temporary files, I/O for services, anti-virus, etc. Another focus is everything related to Visual Studio, which also loads lots of DLLs/plug-ins, runs the compiler, handles temp files, etc. SQL Server and IIS are third priority -- although SQL Server is easy in some ways because it's completely configurable (it will have a dedicated log drive, for example).

    Quote Originally Posted by stevecs
    If that's what you're doing then you may want to look at RAM-based solutions (ACard, Fusion-io, et al): put your OS on an SSD and then move what you're doing over to the RAM device and use that.
    Since RAM devices lose everything when they lose power, using one would mean frequently copying everything back and forth to more durable storage, which isn't workable in my environment.

    The Fusion-io ioDrive is a possibility, although at about $3300 for 80GB its cost is roughly 3x that of the X25-E on a per-GB basis, if you include the controllers (and the -E is already outrageous). I'm hoping there's a better way.

    Quote Originally Posted by stevecs
    Otherwise change your process to increase concurrency/queue depth. Multiple controllers are not an issue (and are actually the solution) for high performance. You spread your disks across them and then use the OS to create a striped volume (dynamic disk, or if $$ VxFS from Veritas in Windows).
    The issue there is that, AFAIK, software-striped volumes aren't bootable -- so that would help with my application stuff, but not with the OS stuff. With only 8 drives and a requirement for redundancy, does splitting them across three controllers (one for the OS and two for apps) really make sense?

    Quote Originally Posted by stevecs
    Thing is, the items on your list (random small-block I/O and sequential large-block I/O) are the antithesis of each other. If you try to do both you'll not do either well.
    Which is why they're prioritized. Small-block performance is more important for the SSD subsystem. If that means completely giving up large-block perf, that's fine.

    Quote Originally Posted by stevecs
    For large storage (you mention RAID-6 & 10TB), for max IOPS you want to create multi-layered RAIDs -- i.e. the more RAID-6 groups you have striped together, the better the overall performance, since it increases your write throughput, which would otherwise be killed by the 6-for-1 penalty (3 reads + 3 writes for every write of less than a full stripe width). Would recommend small drives (146/300GB) bunched into several 6D+2P RAID groups that you merge into one volume space. Assuming your goals apply to that as well.
    The goals don't apply to the large storage subsystem.

    The RAID-6 array will primarily be used for secondary storage and archive, where performance isn't as critical. Actually, large-block perf is probably more important there than small-block, and higher latency is tolerable. Reliability is really the main metric there (I may go with RAID-60; that part is still TBD).
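
    (For reference, the 6-for-1 penalty works out roughly like this -- the per-disk IOPS figure below is just an assumed number for a 7200rpm SATA drive, not a measurement:)

    Code:
    # Rough random-I/O numbers for an 8D+2P RAID-6 group.
    # disk_iops is an assumed figure for a 7200rpm SATA drive, purely illustrative.
    disk_iops = 80
    spindles = 8 + 2
    read_iops = spindles * disk_iops          # small random reads hit all spindles: ~800 IOPS
    write_iops = spindles * disk_iops / 6.0   # each partial-stripe write costs 3 reads + 3 writes: ~133 IOPS
    print(read_iops, write_iops)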

    Quote Originally Posted by stevecs
    What is your budget, and what base performance goal are you looking for (since you mentioned DB work, that's going to be pretty random access)?
    My current rough budget for the SSD side of things is approx. $400 each for eight drives, plus $750 each for two controllers, so $5K or so. On the magnetic side, approx. 13 1TB drives at $150 each, plus $1K for the controller, so about $3K, or a little more if you include chassis components (this all needs to fit in a single tower, so it's a bit of a squeeze). Tentative config for the magnetic drives: 1 hot spare, 2 in RAID-1 for the DB log, and the remainder as either an 8+2 RAID-6 or possibly two 3+2 RAID-6 groups striped as RAID-60 (with two or maybe three partitions, including one short-stroked for best perf).
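
    Rolling those numbers up (same approximate prices as above):

    Code:
    # Budget roll-up using the approximate prices quoted above.
    ssd_side = 8 * 400 + 2 * 750        # eight SSDs plus two controllers: $4,700 (call it $5K)
    magnetic_side = 13 * 150 + 1000     # thirteen 1TB drives plus one controller: $2,950 (call it $3K)
    print(ssd_side, magnetic_side, ssd_side + magnetic_side)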

    The X25-E is spec'd at 35,000 IOPS for random 4K reads. I'm hoping that with a RAID-5 array I might hit close to 200,000 IOPS. SQL Server actually reads and writes 8K pages and 64KB extents, but I would still be inclined to push the optimization toward the smaller side.
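
    The arithmetic behind that hope, with an assumed efficiency factor (the 0.7 is a guess for controller/driver overhead, not a measured number):

    Code:
    # Rough read-IOPS ceiling for eight X25-Es in RAID-5.
    per_drive_iops = 35000                      # spec'd random 4K read IOPS per X25-E
    drives = 8
    raw_ceiling = per_drive_iops * drives       # 280,000 IOPS with perfect scaling
    efficiency = 0.7                            # assumed controller/driver overhead factor (a guess)
    print(raw_ceiling, raw_ceiling * efficiency)   # ~196,000 IOPS -- close to the 200K target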

    IIRC, the X25-E is internally striped in 4K blocks -- which means a 4K strip size would be a reasonable minimum. In a 7+1 RAID-5 config, that works out to a 28K full stripe. Possible alternatives: a 4+1 RAID-5 (16K stripe) together with a 2+1 RAID-5 (8K stripe), or two 2+1 RAID-5 arrays plus one RAID-1 mirror.
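
    Spelling out the stripe math, assuming a 4K strip per data drive:

    Code:
    # Full-stripe sizes for the candidate layouts, assuming a 4K strip per data drive.
    strip_kb = 4
    layouts = {"7+1 RAID-5": 7, "4+1 RAID-5": 4, "2+1 RAID-5": 2}
    for name, data_drives in layouts.items():
        print(name, data_drives * strip_kb, "KB full stripe")   # 28K, 16K, 8K
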
    Last edited by AceNZ; 01-26-2010 at 05:06 AM.
