
Thread: Ideal stripe size for SSD RAID 0 running OS and apps?

  1. #1
    Xtreme Mentor
    Join Date
    Sep 2006
    Posts
    3,246

    Ideal stripe size for SSD RAID 0 running OS and apps?

Dual-SSD RAID 0 setup running OS and apps. I assume the smallest stripe you can get is optimal with SSDs. Does anyone disagree?

  2. #2
    I am Xtreme
    Join Date
    Jan 2006
    Location
    Australia! :)
    Posts
    6,096
sorry, bit OT: are those 7 VRs attached to the Areca? If so, do they play nice together?

    cheers!
    DNA = Design Not Accident
    DNA = Darwin Not Accurate

    heatware / ebay
    HARDWARE I only own Xeons, Extreme Editions & Lian Li's
    https://prism-break.org/

  3. #3
    Xtreme Mentor
    Join Date
    Sep 2006
    Posts
    3,246
    Quote Originally Posted by tiro_uspsss View Post
sorry, bit OT: are those 7 VRs attached to the Areca? If so, do they play nice together?

    cheers!
    Yes, they are running off the Areca. When I get home from work tonight I'll give you a detailed discussion with some benches.

  4. #4
    I am Xtreme
    Join Date
    Jan 2006
    Location
    Australia! :)
    Posts
    6,096
    Quote Originally Posted by Speederlander View Post
    Yes, they are running off the Areca. When I get home from work tonight I'll give you a detailed discussion with some benches.
    sweeeeet! look forward to it!
    DNA = Design Not Accident
    DNA = Darwin Not Accurate

    heatware / ebay
    HARDWARE I only own Xeons, Extreme Editions & Lian Li's
    https://prism-break.org/

  5. #5
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,569
64K worked best for me.

Anything higher made no real difference, and lower gave worse performance.
At least with my setup.

  6. #6
    Xtreme Member
    Join Date
    Nov 2007
    Location
    Colorado, USA
    Posts
    101
    Quote Originally Posted by Speederlander View Post
Dual-SSD RAID 0 setup running OS and apps. I assume the smallest stripe you can get is optimal with SSDs. Does anyone disagree?
    I just installed 3x Samsung SSD RAID 0 w/onboard nVidia controller and I'm wondering the same thing.

IIRC, SSDs have more trouble with small writes. I would guess that a medium (64KB or 128KB) to large (256KB or 512KB) stripe size would be optimal, perhaps even greater.

The problem, of course, is that this whole scheme was designed to overcome mechanical hard drive limitations, and in our case all it does is increase available bandwidth. Basically we're just muxing a bunch of memory on a SATA controller instead of the system board.

I think very soon we will see a paradigm shift in the way data is stored. The whole concept of a 'disk' now seems a bit vague and unnecessary, with available memory that's soon going to approach DRAM speeds ...

    Just thinking out loud

    I was going to do some quick tests tonight but man that crap takes so much time ...

    In the meantime here are a couple links for you to peruse:

    StorageReview RAID Level 0 Overview

    Stripe Width and Stripe Size

    Please let us know the results of your testing
    Last edited by Sunayknits; 08-20-2008 at 06:18 PM.
    My Latest Project: SXbox

    • TT Mozart TX Case
    • Samsung 32Gb SSD * 3 (RAID 0)
    • Raptor 150GB x 2 (RAID 0)
    • 2x 8800GTX 768MB 640/1040Mhz
    • Core 2 Duo E6420 Conroe 2.13/3.00Ghz
    • EK + Apogee GTX
    • ASUS P5N32-E SLI Plus
    • 4GB 1066 Mem
    • Silverstone DA850
    • 2x MCP350
    • 2x Magicool Extreme Slim Profile Rads

    + Many Many Hours

  7. #7
    Memory Addict
    Join Date
    Aug 2002
    Location
    Brisbane, Australia
    Posts
    11,651
    ---

  8. #8
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,569
    Quote Originally Posted by eva2000 View Post
Thanks for that link and the tests with the RAIDs, Eva! Very interesting, to say the least.

Not sure, but am I completely missing the picture here? Forget for a moment that my RAID is 8x SSDs; let's say it's 4x SSDs. I also used 64k stripes: larger was no benefit, smaller and performance was worse.

Look at this graph: there is no spiking, just nice smooth lines.


Head over to DVNation and look at his benches.
http://www.dvnation.com/benchmarks.html

I just see huge problems with these new SSDs in the graphs I keep seeing.

Keep in mind that I would love to see these newer SSDs perform well; I would jump on them in a heartbeat. But to be honest, I just don't see the performance in the new ones.

  9. #9
    Memory Addict
    Join Date
    Aug 2002
    Location
    Brisbane, Australia
    Posts
    11,651
yeah, well, your SSDs are SLC-based: the price per GB is much higher, and so is the performance compared to MLC SSDs
    ---

  10. #10
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,569
    Quote Originally Posted by eva2000 View Post
yeah, well, your SSDs are SLC-based: the price per GB is much higher, and so is the performance compared to MLC SSDs
That's a great point, and I won't disagree with that.

It's just that, to me, the new MLC SSDs don't seem to stack up to the older SLC models performance-wise (and btw, the price for those has come way down).

  11. #11
    Xtreme Mentor
    Join Date
    May 2005
    Location
    Westlake Village, West Hills
    Posts
    3,046
Wow, almost 900 megs of read and 0.1ms access time, I am so jealous.. Though I bet your CPU is a huge bottleneck with hard drive performance like that haha.
    PC Lab Qmicra V2 Case SFFi7 950 4.4GHz 200 x 22 1.36 volts
    Cooled by Swiftech GTZ - CPX-Pro - MCR420+MCR320+MCR220 | Completely Silent loads at 62c
    GTX 470 EVGA SuperClocked Plain stock
    12 Gigs OCZ Reaper DDR3 1600MHz) 8-8-8-24
    ASUS Rampage Gene II |Four OCZ Vertex 2 in RAID-0(60Gig x 4) | WD 2000Gig Storage


    Theater ::: Panasonic G20 50" Plasma | Onkyo SC5508 Processor | Emotiva XPA-5 and XPA-2 | CSi A6 Center| 2 x Polk RTi A9 Front Towers| 2 x Klipsch RW-12d
    Lian-LI HTPC | Panasonic Blu Ray 655k| APC AV J10BLK Conditioner |

  12. #12
    Xtreme Addict
    Join Date
    Mar 2008
    Posts
    1,163
Yes, small writes are an issue. Basically, SSDs have to write whole erase blocks, not parts of them. An erase block is usually 2MB; sometimes 8MB, and I've even heard a suggestion that one drive might use 16MB.
(I don't remember how many drives you have; for this post I assume 9 in RAID 5.)
When you have a 64k stripe and a 0.5MB file, it gets striped across all the drives, and you write a total of 18MB at the performance of a single drive. You only get a performance increase once the file size exceeds 2MB: then all the drives write just one block each, instead of one drive writing two. If I get it correctly (which is not that sure), write performance should be about the same all the way up to stripe size = erase block size. Actually, it should get slightly better, because the controller has a simpler job. Life expectancy would be best in this case too.
You'll lose read performance, though.
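
As a rough illustration of that arithmetic, here is a minimal sketch; the 2MB erase block and the 9-drive RAID 5 are the assumptions above, and real controllers coalesce writes far more cleverly than this worst case:

Code:
import math

KB, MB = 1024, 1024 * 1024

def flash_bytes_written(file_size, stripe, erase_block, data_drives, parity_drives=1):
    """Worst-case flash traffic for one small RAID 5 write, assuming every
    stripe chunk forces its drive to rewrite a full erase block."""
    chunks = math.ceil(file_size / stripe)
    touched = min(chunks, data_drives) + parity_drives   # the parity drive rewrites too
    blocks_per_drive = math.ceil(math.ceil(chunks / data_drives) * stripe / erase_block)
    return touched * blocks_per_drive * erase_block

# 0.5 MB file, 64 KB stripe, 2 MB erase block, 9 drives (8 data + 1 parity):
print(flash_bytes_written(512 * KB, 64 * KB, 2 * MB, data_drives=8) / MB)  # -> 18.0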

  13. #13
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,569
    Quote Originally Posted by Nanometer View Post
    Wow almost 900 Megs of read and .1ms access time, I am so jealous.. Though I bet your cpu is a huge bottleneck with hard drive performance like that haha.
Actually, it's not a real problem for the CPU.

This is RAID 5 running here:


    Quote Originally Posted by m^2 View Post
Yes, small writes are an issue. Basically, SSDs have to write whole erase blocks, not parts of them. An erase block is usually 2MB; sometimes 8MB, and I've even heard a suggestion that one drive might use 16MB.
(I don't remember how many drives you have; for this post I assume 9 in RAID 5.)
When you have a 64k stripe and a 0.5MB file, it gets striped across all the drives, and you write a total of 18MB at the performance of a single drive. You only get a performance increase once the file size exceeds 2MB: then all the drives write just one block each, instead of one drive writing two. If I get it correctly (which is not that sure), write performance should be about the same all the way up to stripe size = erase block size. Actually, it should get slightly better, because the controller has a simpler job. Life expectancy would be best in this case too.
You'll lose read performance, though.
Yes, that sounds very good, nice job. Depending on the RAID size, the controller used in the SSD, the number of drives, etc., the stripe size may vary; it's like tuning your SSD RAID.

  14. #14
    Xtreme Addict
    Join Date
    Mar 2008
    Posts
    1,163
What I wanted to say is that when it comes to writes, the optimal stripe size would not depend on the number of drives, RAID level, or usage patterns, but only on the SSD's construction.

  15. #15
    Xtreme Member
    Join Date
    Nov 2007
    Location
    Colorado, USA
    Posts
    101
I did some testing last night to try to determine the optimal stripe size for my 3x Samsung 32GB (Model MCBQE32G5MPP-0VA00) SSD array.

Using an ASUS P5N32-E SLI Plus w/onboard nVRAID, I tried all available stripe sizes. I re-imaged the array with a basic XP SP3 install each time, gave the system a few minutes to settle, and ran the tests.

    I'm not going to attempt an in-depth analysis of these results because frankly there are just too many factors I don't understand.

Obviously these controllers are optimized for hard drives, and use specific methods to deal with things like rotational and access latency. For example, IMHO NCQ (Native Command Queuing) is absolutely useless for an SSD and could even be causing problems with this new paradigm. NCQ is meant to reduce the problems caused by slow access times on hard drives and doesn't make sense at all with SSDs.

I'm using an 8KB stripe size right now and it seems to be performing very well. I don't use this box for anything but gaming, so it will take some time before I can say how it performs with day-to-day tasks ...

I chose 8KB because it has the highest avg. read, plus the highest CPU util (depending on which benchmark program you look at lol). The higher CPU util leads me to believe more data is being fetched at a faster rate from the on-board controller. Plus it just felt faster as I was using it.

I have since installed HL: Episode 2, and the load time between levels, which was annoyingly long on my Raptors before, is now about a third of what it was. Whether this is worth $1200 I'm still not sure, but being on the bleeding edge is what Xtreme is all about, right?

    Enough ranting, here are my results:
*Note that these are READ tests only; writes could be an entirely different story ...



[Benchmark screenshots for 4KB, 8KB, 16KB, 32KB, 64KB, and 128KB stripe sizes, plus an "Optimal" stripe size run.]
    My Latest Project: SXbox

    • TT Mozart TX Case
    • Samsung 32Gb SSD * 3 (RAID 0)
    • Raptor 150GB x 2 (RAID 0)
    • 2x 8800GTX 768MB 640/1040Mhz
    • Core 2 Duo E6420 Conroe 2.13/3.00Ghz
    • EK + Apogee GTX
    • ASUS P5N32-E SLI Plus
    • 4GB 1066 Mem
    • Silverstone DA850
    • 2x MCP350
    • 2x Magicool Extreme Slim Profile Rads

    + Many Many Hours

  16. #16
    Xtreme Addict
    Join Date
    Mar 2008
    Posts
    1,163
Spiky
What happens at 16GB???
Do you know how these benchmarks test drives? I wouldn't be surprised if the stripe size that's optimal for them isn't actually that good in real-world apps.
    Last edited by m^2; 08-21-2008 at 10:47 PM.

  17. #17
    Xtreme Member
    Join Date
    Nov 2007
    Location
    Colorado, USA
    Posts
    101
    Quote Originally Posted by m^2 View Post
Spiky
What happens at 16GB???
Do you know how these benchmarks test drives? I wouldn't be surprised if the stripe size that's optimal for them isn't actually that good in real-world apps.
Yeah, part of the problem is using the onboard controller, which is the sux compared to an Areca, HighPoint or 3Ware ... Alas, I would have to replace my SB waterblock in order to use an add-in card (and I'm starting to think this may be worth the effort).

Also, I'm running the bench from the drives being benched, so I suspect that's where the big spike at 16GB comes from.

    Another couple of unknowns:

    a) These benches are meant for hard drives and may actually be poor indicators of real-world SSD performance, as you mentioned.

    b) The stripe width (# of drives in the array) may affect performance significantly. I'm guessing a power of 2 is probably better, but I have no facts to back this up ... I'm tempted to buy another SSD, along with an add-in card, but damn this hobby is expensive!
    My Latest Project: SXbox

    • TT Mozart TX Case
    • Samsung 32Gb SSD * 3 (RAID 0)
    • Raptor 150GB x 2 (RAID 0)
    • 2x 8800GTX 768MB 640/1040Mhz
    • Core 2 Duo E6420 Conroe 2.13/3.00Ghz
    • EK + Apogee GTX
    • ASUS P5N32-E SLI Plus
    • 4GB 1066 Mem
    • Silverstone DA850
    • 2x MCP350
    • 2x Magicool Extreme Slim Profile Rads

    + Many Many Hours

  18. #18
    Xtreme Addict
    Join Date
    Mar 2008
    Posts
    1,163
    Quote Originally Posted by Sunayknits View Post
    b) The stripe width (# of drives in the array) may affect performance significantly. I'm guessing a power of 2 is probably better, but I have no facts to back this up ... I'm tempted to buy another SSD, along with an add-in card, but damn this hobby is expensive!
I don't think so; it shouldn't make a significant difference unless your cluster size is bigger than your stripe size. Maybe it fits some controller structures better... but anyway, the bigger the better.

  19. #19
    Xtreme Cruncher
    Join Date
    Nov 2002
    Location
    Belgium
    Posts
    605
Everyone seems to use different stripe sizes.

I am wondering which is the best possible combination of stripe size and cluster size for a RAID 0 SSD environment. (In my case, 2x 160GB Intel SSD G2.)
There must be a "best" value somewhere, no?


    Main rig 1: Corsair Carbide 400R 4x120mm Papst 4412GL - 1x120mm Noctua NF-12P -!- PC Power&Cooling Silencer MK III 750W Semi-Passive PSU -!- Gigabyte Z97X-UD5H -!- Intel i7 4790K -!- Swiftech H220 pull 2x Papst 4412 F/2GP -!- 4x4gb Crucial Ballistix Tactical 1866Mhz CAS9 1.5V (D9PFJ) -!- 1Tb Samsung 840 EVO SSD -!- AMD RX 480 to come -!- Windows 10 pro x64 -!- Samsung S27A850D 27" + Samsung 2443BW 24" -!- Sennheiser HD590 -!- Logitech G19 -!- Microsoft Sidewinder Mouse -!- Fragpedal -!- Eaton Ellipse MAX 1500 UPS .





  20. #20
    Xtreme Addict
    Join Date
    Oct 2005
    Location
    England, Northwest
    Posts
    1,219
I'm using 128K with two 80GB G2s on a HighPoint RR3520.

Boot-up time isn't as amazing as some users' on here, but I suspect that's down to using the RAID card instead of the on-board controllers. (Which I'm using more for reliability.)

While it probably doesn't net the very best performance, I have to point out that it doesn't really matter that much. They are still lightning fast, and I can still virus scan my C drive in 2 minutes 20 seconds. Shut-down time is very impressive as well.

    In short, I'm sure you'll be happy with whatever you go for.

  21. #21
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Hong Kong
    Posts
    1,905
From what it seems, on a single SSD 32K/64K look like the best stripe sizes, and in RAID, 64K/128K.

I'm using 64K on a single Vertex 60 and am getting subpar sequential reads/writes (190/95 respectively), as opposed to the 230/130 figures advertised on the box or whatever. XP x64 SP2.
    -


    "Language cuts the grooves in which our thoughts must move" | Frank Herbert, The Santaroga Barrier
    2600K | GTX 580 SLI | Asus MIV Gene-Z | 16GB @ 1600 | Silverstone Strider 1200W Gold | Crucial C300 64 | Crucial M4 64 | Intel X25-M 160 G2 | OCZ Vertex 60 | Hitachi 2TB | WD 320

  22. #22
    Xtreme Cruncher
    Join Date
    Nov 2002
    Location
    Belgium
    Posts
    605
Yes, it seems indeed that most people recommend a 64k stripe size and leave cluster size at the default, which is 4k if I'm not mistaken.

I will go with those sizes and do a fresh Win7 install this evening.


    Main rig 1: Corsair Carbide 400R 4x120mm Papst 4412GL - 1x120mm Noctua NF-12P -!- PC Power&Cooling Silencer MK III 750W Semi-Passive PSU -!- Gigabyte Z97X-UD5H -!- Intel i7 4790K -!- Swiftech H220 pull 2x Papst 4412 F/2GP -!- 4x4gb Crucial Ballistix Tactical 1866Mhz CAS9 1.5V (D9PFJ) -!- 1Tb Samsung 840 EVO SSD -!- AMD RX 480 to come -!- Windows 10 pro x64 -!- Samsung S27A850D 27" + Samsung 2443BW 24" -!- Sennheiser HD590 -!- Logitech G19 -!- Microsoft Sidewinder Mouse -!- Fragpedal -!- Eaton Ellipse MAX 1500 UPS .





  23. #23
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Quote Originally Posted by CedricFP View Post
From what it seems, on a single SSD 32K/64K look like the best stripe sizes, and in RAID, 64K/128K.

I'm using 64K on a single Vertex 60 and am getting subpar sequential reads/writes (190/95 respectively), as opposed to the 230/130 figures advertised on the box or whatever. XP x64 SP2.
You don't use a stripe size with a single drive. Stripe size is only for RAID.
    Quote Originally Posted by CrimInalA View Post
Yes, it seems indeed that most people recommend a 64k stripe size and leave cluster size at the default, which is 4k if I'm not mistaken.

I will go with those sizes and do a fresh Win7 install this evening.
128k or larger is what you want. I would go into it, but I am lazy. The short version is that the erase block page is generally 64k, so every time a write is done, 64k needs to be written to the drive. Now multiply that by two, since you have 2 drives, and you get 128k.
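
In numbers, taking that 64k erase-block-page figure as given (it's a claim from this thread, not a datasheet value):

Code:
ERASE_PAGE_KB = 64   # claimed erase block page size (assumption, not a datasheet value)

def suggested_stripe_kb(n_drives, erase_page_kb=ERASE_PAGE_KB):
    # one erase page per drive, written in parallel
    return erase_page_kb * n_drives

print(suggested_stripe_kb(2))  # -> 128, i.e. a 128k stripe for a 2-drive RAID 0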

  24. #24
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Hong Kong
    Posts
    1,905
    Quote Originally Posted by lowfat View Post
You don't use a stripe size with a single drive. Stripe size is only for RAID.

I meant alignment, but I thought ultimately they ended up being the same thing.
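
They're related but not the same: stripe size chunks data across the drives, while alignment is about where a partition starts relative to the flash pages. A quick sketch of the alignment check, with 4KB taken as a typical NAND page size (an assumption; check your drive):

Code:
PAGE = 4096   # typical NAND page size (assumption)

def is_aligned(offset_bytes, granularity=PAGE):
    """A partition start is aligned if its byte offset is a multiple of the page size."""
    return offset_bytes % granularity == 0

print(is_aligned(63 * 512))    # False: XP's old 63-sector default start
print(is_aligned(2048 * 512))  # True: Vista's 1 MB default start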
    -


    "Language cuts the grooves in which our thoughts must move" | Frank Herbert, The Santaroga Barrier
    2600K | GTX 580 SLI | Asus MIV Gene-Z | 16GB @ 1600 | Silverstone Strider 1200W Gold | Crucial C300 64 | Crucial M4 64 | Intel X25-M 160 G2 | OCZ Vertex 60 | Hitachi 2TB | WD 320

  25. #25
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
I would highly recommend as large as you can get... the bigger the better. I use 1MB.
