
Thread: Stripe size for SSD RAID0?

  1. #1
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061

    Stripe size for SSD RAID0?

    What stripe size are you guys running for your SSD RAID0 arrays? Why did you pick what you did? Did you evaluate or bench a variety of stripe sizes?

    The reason I ask is that I recall 64K being a sweet-spot stripe size, balancing parallelism with seek time. It would seem to me that with SSDs having virtually no seek time, you could run small stripe sizes without any negative effects. This would improve parallelism, as even small chunks of data and files would benefit from the striping.

    Thoughts?

  2. #2
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    128k or 256k seem to be the favourites around here.

  3. #3
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    32K would make the best sense, as the majority of OS reads/writes are 64K in size (cache manager page size).
    Sustained reads/writes won't see any major impact from the stripe size. On the other hand, if you take random reads/writes into account, you want to try and split them - since you get no noticeable seek penalty from the striping, and since most reads/writes are @64K, the logical stripe is 32K.
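    A minimal sketch of that arithmetic (illustrative Python, not from the poster; the two-drive array is an assumption, the 64K request size is the one cited above):

        # Illustrative only: split the typical OS request evenly across the array.
        typical_request_kib = 64   # cache-manager request size cited in the post
        drives = 2                 # assumed two-drive RAID0
        suggested_stripe_kib = typical_request_kib // drives
        print(suggested_stripe_kib)   # 32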
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  4. #4
    I am Xtreme zanzabar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    You would want 4k * (number of drives) * 2 (and round up to the next level if you fall in the middle) so everything stays aligned - so 16k for two drives, or 32k for 3/4 drives. Or at least that's what I would do for an SSD; for a normal drive, make it however big you can.

    Also note the Vertex has 2 drives in 1, so 2 Vertex drives count as 4 - so a 32k stripe.
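    A quick sketch of that rule of thumb (illustrative Python; reading "round up to the next level" as rounding up to the next power of two is my assumption):

        # Hypothetical helper: stripe = 4k * drives * 2, rounded up to a power of two.
        def suggested_stripe_kib(drives, page_kib=4):
            raw = page_kib * drives * 2
            stripe = 1
            while stripe < raw:        # round up to the next power of two
                stripe *= 2
            return stripe

        print(suggested_stripe_kib(2))   # 16
        print(suggested_stripe_kib(3))   # 32 (24 rounded up)
        print(suggested_stripe_kib(4))   # 32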
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  5. #5
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    It all makes sense, but I don't see any negative effect of running 16K stripes, or even smaller if it were an option, on an SSD RAID0 array. The smaller the stripe, the better on SSDs. I can't see a valid argument to the contrary. Prove me wrong though.

  6. #6
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    For an OS partition, smaller stripe sizes (~8K) perform best on traditional media, as they match your request size and you can have multiple spindles working on different requests. With SSDs however, due to their blocking, larger stripe sizes perform best (128K or larger). At least that has been what we've found so far in testing. Since blocking/# pages per block etc. are different for different manufacturers and different models in different lines, you should always test for actual results.

    For local results here, it was touched on in http://www.xtremesystems.org/forums/...=219957&page=2 post #45, though in a more technical manner.
    Last edited by stevecs; 05-09-2009 at 04:06 AM.

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
    |.Supermicro X8DTH-6F...............|.Asus Z9PE-D8 WS.................................|.HP LP3065 30" LCD Monitor|
    |.(2) Xeon X5690....................|.2x E5-2643 v2...................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2x EVGA nVidia GTX670 4GB.......................|.Nikon Coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33GHz; 8GB RAM.......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|

  7. #7
    Worlds Fastest F5
    Join Date
    Aug 2006
    Location
    Room 101, Ministry of Truth
    Posts
    1,615
    Running 128k on both my arrays here... working great for me
    X5670 B1 @175x24=4.2GHz @1.24v LLC on
    Rampage III Extreme Bios 0003
    G.skill Eco @1600 (7-7-7-20 1T) @1.4v
    EVGA GTX 580 1.5GB
    Auzen X-FI Prelude
    Seasonic X-650 PSU
    Intel X25-E SLC RAID 0
    Samsung F3 1TB
    Corsair H70 with dual 1600 rpm fan
    Corsair 800D
    3008WFP A00



  8. #8
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    64k @ areca - whether its 1x ssd or 8x or anything in between

    256k @ highpoint/adaptec - has to be adjusted to the number of ssds
    Last edited by NapalmV5; 05-09-2009 at 10:21 AM.

  9. #9
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Quote Originally Posted by stevecs View Post
    With SSDs however, due to their blocking, larger stripe sizes perform best (128K or larger). At least that has been what we've found so far in testing. Since blocking/# pages per block etc. are different for different manufacturers and different models in different lines, you should always test for actual results.

    For local results here, it was touched on in http://www.xtremesystems.org/forums/...=219957&page=2 post #45, though in a more technical manner.
    I'm going to need a better explanation... can someone explain the "blocking" and "# of pages per block" concept to me and how that impacts stripe size?

    A pair of Intel X25-M SSDs is what I'm going to be using.

  10. #10
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Quote Originally Posted by Biker View Post
    Running 128k on both my arrays here... working great for me
    Why did you pick 128K? Did you try other stripe sizes? What were the results?

    Quote Originally Posted by NapalmV5 View Post
    64k @ areca - whether its 1x ssd or 8x or anything in between

    256k @ highpoint/adaptec - has to be adjusted to the number of ssds
    Can you explain the theory behind this? Any test results?

  11. #11
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Here's how I understand striping at the simplest level...

    If the OS requests a 64K chunk of data from disk, this is what will happen with various stripe sizes...
    16K stripe = each disk has to seek and deliver two 16k chunks of data
    32K stripe = each disk has to seek and deliver one 32k chunk of data
    64K stripe = only one disk will have to seek and deliver a chunk of data

    Now, with magnetic storage, the 16k stripe is inefficient because each disk has to seek twice. The 64K stripe is not ideal either in this example because you are not gaining any parallelism from this data fetch. The 32K stripe is ideal in this example because you are maximizing parallelism but each disk only has to seek once.
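    A small sketch of that breakdown (illustrative Python, not from the poster; it assumes the request is aligned to a stripe boundary on a two-drive array):

        # Illustrative only: how many chunks each member of a RAID0 array delivers
        # for a single aligned request of a given size.
        def chunks_per_drive(request_kib, stripe_kib, drives):
            counts = [0] * drives
            stripes_touched = -(-request_kib // stripe_kib)   # ceiling division
            for i in range(stripes_touched):
                counts[i % drives] += 1
            return counts

        for stripe_kib in (16, 32, 64):
            print(stripe_kib, chunks_per_drive(64, stripe_kib, 2))
        # 16 [2, 2] -> each disk delivers two 16K chunks
        # 32 [1, 1] -> each disk delivers one 32K chunk
        # 64 [1, 0] -> only one disk delivers data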

    If you take the seek time out of it, then there are NO negative effects to going with a smaller stripe size, and the benefit is that when the OS requests smaller chunks of data, you will gain the benefits of added parallelism.

    A 128K stripe makes no sense to me at all because it means that for almost all small data chunks requested from or written to disk, only one disk will be working and you are not gaining the benefits of parallelism from striping across drives. Such a large stripe size would only make sense if all of your data requests are significantly bigger than 128K.

    What's flawed with my thinking?

  12. #12
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by virtualrain View Post
    Here's how I understand striping at the simplest level...

    If the OS requests a 64K chunk of data from disk, this is what will happen with various stripe sizes...
    16K stripe = each disk has to seek and deliver two 16k chunks of data
    32K stripe = each disk has to seek and deliver one 32k chunk of data
    64K stripe = only one disk will have to seek and deliver a chunk of data

    Now, with magnetic storage, the 16k stripe is inefficient because each disk has to seek twice. The 64K stripe is not ideal either in this example because you are not gaining any parallelism from this data fetch. The 32K stripe is ideal in this example because you are maximizing parallelism but each disk only has to seek once.

    If you take the seek time out of it, then there are NO negative effects to going with a smaller stripe size, and the benefit is that when the OS requests smaller chunks of data, you will gain the benefits of added parallelism.

    A 128K stripe makes no sense to me at all because it means that for almost all small data chunks requested from or written to disk, only one disk will be working and you are not gaining the benefits of parallelism from striping across drives. Such a large stripe size would only make sense if all of your data requests are significantly bigger than 128K.

    What's flawed with my thinking?
    it comes down to the controller.. that's why on Areca controllers it's a 64k stripe no matter the number of SSDs.. it'll take your SSD/s to its/their highest performance.. no controller comes close to the efficiency of Areca controllers and that's why they're the best SSD raid controllers

  13. #13
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by virtualrain View Post
    Can you explain the theory behind this? Any test results?
    lol not theory.. the best results I got were @ those ^ stripes

    once you try different stripes/controllers you'll find out which one's better

  14. #14
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Strange not getting any e-mail updates again..

    @virtualrain - SSDs are composed of numerous cells which store either 1 or 2 bits of information (SLC = 1 bit, MLC = 2 bits). Those cells are combined into pages (which could be any size depending on the manufacturer, but are generally a power of two and generally small - 2KiB or so - per chip; there may be multiple chips/channels running in parallel, so you can also think of this as an internal 'raid array'). Then pages are mapped out into blocks, or defined units, which could be hundreds or thousands of pages per block. So your block size (on the SSD) is akin to, say, a sector size on a HD. On a hard drive your sector size is say 512-524 bytes. So you're starting at your page size (and it could be larger depending on the number of parallel paths, though I haven't found any detailed data on that level of architecture to see what manufacturers are actually implementing).

    Now, your concept of stripe size is correct, however your assumptions may be wrong (i.e. the size of the request). First, stripe size itself has NO BEARING on the request size. In your example you are stating that the OS requests 64KiB of consecutive sectors; that's possible, but it would pretty much never happen with OS partitions, for example (the average is closer to 8KiB and sometimes less - use DiskMon or perfmon to determine/watch your requests). So yes, if you match your request size to your stripe size (assuming no other lower-level blocking is at play, or your stripe size/request size is a multiple of your 'sector' - 512-524 bytes for traditional hard drives - or block size - up to a couple MiB for SSDs), you get the benefit you're describing.

    The LAST part is the killer with SSDs, as you are doing multiple levels of blocking here. Below the file system level you have: OS volume management (dynamic disks, lvm, whatever); raid (hardware or software, but below the OS block API); and the SSD drive itself (a block 'raid'), which is composed of multiple pages and is in turn made up of multiple cells, which may be (and usually are, for performance) striped.

    Now for reads this is not much of an issue (or let's say it's not much of an issue for 'normal' use, as the channel bandwidth is great enough that the differences in performance are moot except for specialized apps - i.e. say > 1,000,000 iops or more). It's the writes (even simple ones such as access-time updates are killers) which destroy performance and compound the lack of parallelism.

    It's very similar to the concept of aligning partitions with the underlying structure (just doing it at multiple levels, as there are multiple underlying structures here). Generally speaking, no one really does this as it's time consuming and heavily dependent on your upper layers (raid, volume management, partitions et al); they just add disks and cache to buffer the issues.

    The write hole is still the killer due to the tech used (NAND flash): you cannot overwrite a NAND cell. You have to erase the cell first, which takes time. This is where and why pages/blocking and spare pages come into play.

    Normally you would have (for the sake of argument) 1000 pages in a block (keep it simple and say it's SLC NAND). You have 800 pages used up in that block and want to delete 200 and write 300 pages from an OS perspective. The SSD will take the delete request and mark those pages as inactive (not removing any data, just marking the cells as inactive). Then the write request comes in. Now the block has 800 pages used (600 active, 200 inactive) and the OS is requesting a write of 300. It can't do it with the 1000 pages it has (only 200 are free), so the SSD has to copy all 600 active pages to temporary storage, erase the entire block, copy back the 600 active pages and then write your 300 new pages.

    As a way to mitigate this, all manufacturers allocate more pages per block than what is presented to the host OS. I.e., in the same example you have 1000 pages in a block for user data, but the block's ACTUAL size is, say, 2000 pages. In the above example you had 800 pages used, you mark 200 pages as inactive, so you have 600 active pages and 200 inactive pages - but out of a total of 2000 'real' pages (though only 1000 can be active at any one time) - so your 300-page write request can be done WITHOUT the erase. You'll have 600 + 300 new = 900 active pages, and still 200 pages marked as inactive. You'll still run into the problem of the erase cycle, but it has been delayed a bit in this case.

    This is where the other mitigating idea, the TRIM command, comes in. If this is implemented at the SATA/SAS, SSD, LVM and file system layers (it has to be in all layers from the FS down to the SSD), the erase (if needed) will happen at the time of the DELETE, not at the time of the WRITE as above. It still happens, but the theory is that it is more palatable to the end user when it doesn't happen while writing new data to the drive.
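    A toy model of that example (illustrative Python only; the 1000-page user area and 2000-page physical block are the hypothetical numbers from the paragraph above, not real drive parameters):

        # Toy model of a NAND block: pages are free, active, or inactive (deleted but
        # not yet erased). A write that doesn't fit in the clean pages forces the
        # copy-out / erase / copy-back cycle described above.
        class NandBlock:
            def __init__(self, user_pages, real_pages):
                self.user_pages = user_pages    # pages exposed to the host
                self.real_pages = real_pages    # physical pages actually in the block
                self.active = 0                 # pages holding live data
                self.inactive = 0               # pages holding deleted (stale) data

            def clean(self):
                return self.real_pages - self.active - self.inactive

            def delete(self, pages):
                self.active -= pages
                self.inactive += pages          # data only marked stale, not erased

            def write(self, pages):
                if pages <= self.clean() and self.active + pages <= self.user_pages:
                    self.active += pages
                    return "write without erase"
                # Not enough clean pages: copy active data out, erase block, copy back.
                self.inactive = 0
                self.active += pages
                return "erase cycle required"

        # No spare pages: 1000 user pages backed by 1000 physical pages.
        plain = NandBlock(1000, 1000)
        plain.active = 800
        plain.delete(200)
        print(plain.write(300))   # erase cycle required

        # Same block with spare area: 1000 user pages backed by 2000 physical pages.
        spare = NandBlock(1000, 2000)
        spare.active = 800
        spare.delete(200)
        print(spare.write(300))   # write without erase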


    I don't know if I covered your questions in the way you wanted. It's a large field and the question is very open.

    @napalmv5 - you hit on another aspect of this as well, which makes most of the questions here hard to answer: raid/striping, or really any API, has implementation issues. Arecas behave differently with certain stripe sizes in parity raids than in non-parity ones, and different sizes also perform differently depending on many factors (scratch space, what type of algorithm they're using - is it 1x64, 2x32, 4x16 or whatever - and how that relates to internal bandwidth and processing power). That's where the old adage comes in: 'world as designed, and world as built'.
    Last edited by stevecs; 05-09-2009 at 04:49 PM.


  15. #15
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Thanks for the explanation about pages and blocks. It's consistent with what I've read elsewhere.

    However, while the issues you raise with SSDs, particularly around writes, are characteristics worth noting, I don't see any relationship between some of these characteristics and the ideal stripe size debate.

    The benefits of RAID0 are increased parallelism... that is, you gain the benefits of fetching data from two or more drives in parallel or writing to two or more drives simultaneously. Right?

    Now, if we take an extreme case where the stripe size is say something ridiculous like 1GB... no one would ever use this... why? Because it would be incredibly rare for the OS to read or write more than a GB at a time so you would never see any benefits from multiple disks in RAID 0 since every read/write operation would be from a single disk.

    So, it follows that as you reduce the stripe size to something more useful, you increase the chance that a piece of data you are reading or writing to disk will span multiple drives and therefore take advantage of the striping.

    As you say, the average piece of data being written or read from disk is actually around 8KB (not 64K, which I was merely using as an example). Well, you can see that if your stripe size is 128KB and you are typically reading and writing 8KB pieces of data... how much of that is going to benefit from striping? Virtually none. The only benefit you will get is when there are multiple requests and they happen to fall on different physical disks... which admittedly is another way to exploit the parallelism of striping, but not the best way. The best way to ensure you are maximizing the capabilities of your striped array would be to have the stripe size be some fraction of your average read/write request. No?

    It's been a long time since I read the theory on stripe sizes... but I believe the old-school thinking was that smaller stripes afforded more parallelism, but with magnetic disks they had the added issue of increased latency, as each disk had to seek multiple times to satisfy a read/write request... so 64/N, where N was the number of disks in the array, was deemed a good compromise on stripe size between parallelism and seek times. What's odd is that people familiar with this formula for ideal stripe size continue to push this theory with SSDs when it doesn't apply anymore. SSDs have virtually no seek time in comparison, so following this old-school theory for stripe size seems naive.
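    A rough way to see the 8KB-versus-128KB point (illustrative Python; the uniformly random request offsets are an assumption, real access patterns differ):

        # Illustrative only: estimate what fraction of requests of a given size
        # touch more than one RAID0 member, for a given stripe size.
        import random

        def fraction_spanning(request_kib, stripe_kib, samples=100_000):
            spanning = 0
            for _ in range(samples):
                offset = random.randrange(stripe_kib)     # offset within a stripe, in KiB
                if offset + request_kib > stripe_kib:     # request crosses into the next stripe
                    spanning += 1
            return spanning / samples

        print(fraction_spanning(8, 128))   # ~0.05 -> 8K requests almost never use two drives
        print(fraction_spanning(8, 16))    # ~0.44 -> a second drive is engaged far more often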
    Last edited by virtualrain; 05-09-2009 at 11:42 PM.

  16. #16
    Xtreme Member
    Join Date
    May 2007
    Posts
    191
    Quote Originally Posted by virtualrain View Post
    Thanks for the explanation about pages and blocks. It's consistent with what I've read elsewhere.

    However, while the issues you raise with SSDs, particularly around writes, are characteristics worth noting, I don't see any relationship between some of these characteristics and the ideal stripe size debate.

    The benefits of RAID0 are increased parallelism... that is, you gain the benefits of fetching data from two or more drives in parallel or writing to two or more drives simultaneously. Right?

    Now, if we take an extreme case where the stripe size is say something ridiculous like 1GB... no one would ever use this... why? Because it would be incredibly rare for the OS to read or write more than a GB at a time so you would never see any benefits from multiple disks in RAID 0 since every read/write operation would be from a single disk.

    So, it follows that as you reduce the stripe size to something more useful, you increase the chance that a piece of data you are reading or writing to disk will span multiple drives and therefore take advantage of the striping.

    As you say, the average piece of data being written or read from disk is actually around 8KB (not 64K, which I was merely using as an example). Well, you can see that if your stripe size is 128KB and you are typically reading and writing 8KB pieces of data... how much of that is going to benefit from striping? Virtually none. The only benefit you will get is when there are multiple requests and they happen to fall on different physical disks... which admittedly is another way to exploit the parallelism of striping, but not the best way. The best way to ensure you are maximizing the capabilities of your striped array would be to have the stripe size be some fraction of your average read/write request. No?

    It's been a long time since I read the theory on stripe sizes... but I believe the old-school thinking was that smaller stripes afforded more parallelism, but with magnetic disks they had the added issue of increased latency, as each disk had to seek multiple times to satisfy a read/write request... so 64/N, where N was the number of disks in the array, was deemed a good compromise on stripe size between parallelism and seek times. What's odd is that people familiar with this formula for ideal stripe size continue to push this theory with SSDs when it doesn't apply anymore. SSDs have virtually no seek time in comparison, so following this old-school theory for stripe size seems naive.
    As a RAID0 user for many years, I am now on an X25-M RAID0 stripe, my first on SSDs..
    The issue regarding the choice of stripe size has been debated for many years, and it's heating up on SSDs as well. I'm more or less in agreement with you regarding smaller stripes: what is the point of spending that much money on a RAID0 setup when only 1 out of 10 OS files benefits from the parallelism? Although some hardware controllers show abysmal results with smaller stripes - maybe an overworked IO processor - this is certainly not the case with my ICH9R onboard controller with a QX9650 @ 4.0GHz as the IO CPU. The CPU usage increases a lot on smaller stripes, but the benches I care about and real-world apps/games show significant improvements on 16/32kb stripes compared to the standard 128kb. Blindly taking the road most used, the 128kb stripe size, might be a safe road but maybe not the fastest one, on onboard Intel ICH at least - even Intel has admitted this. Most of my OS writes will never benefit from even the smallest stripe, but they will not get hurt by it either, as small random file writes are blisteringly fast on X25s. Reads on the other hand feel like a million bucks - what more can one ask for... With new TRIM firmware from Intel to come, and 16/32kb stripes, I will enjoy this X25 array for a long time when Win7 is ready for prime time..
    | Ci7 2600k@4.6ghz | Asus SaberTooth P67 | Sapphire HD7970 | Samsung B555 32" | Samsung 840 PRO 128gb + 2xIntel SSD 520 120GB Raid0 + 2xC300 64GB Raid0 | Corsair Vengeance 16GB DDR3-1600 8-8-8-24 | Vantage GPU=40250 |

  17. #17
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    @the OP - bunch of people did testing on this on various forums. 128kb or larger has already been consistently shown to be the best.

  18. #18
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Quote Originally Posted by One_Hertz View Post
    @the OP - bunch of people did testing on this on various forums. 128kb or larger has already been consistently shown to be the best.
    On SSDs? With what controller? Also, keep in mind that synthetic benchmarks that read/write large chunks of data will, of course, favor larger stripe sizes.

    Does a 128K or larger stripe size make any sense to you? How does this make sense?

  19. #19
    Xtreme Member
    Join Date
    May 2007
    Posts
    191
    Quote Originally Posted by virtualrain View Post
    On SSDs? With what controller? Also, keep in mind that synthetic benchmarks that read/write large chunks of data will of course, favor larger stripe sizes.

    Does a 128K or larger stripe size make any sense to you? How does this make sense?
    Some HW PCIe controllers do show that larger stripes give the best average performance even on SSDs, although I'm not quite sure what they used to test this. In some benches my ICH9R almost doubled the performance compared to an expensive PCIe HW controller at a 16kb stripe on both, and even if the HW controller was quite a bit better than mine at 128kb vs. 128kb stripes, it was still off my 16kb stripe pace... It might be, as I speculated before, that the IO processor can't handle the smaller stripe size workload, and of course the type of SSDs might play a role here as well... My small stripe on ICH9R, however, shows IOPS jumping tremendously at 32-128kb files, with no ill effects on sequential reads/writes or random reads/writes of 4kb or so. People need to learn that you cannot conclude that this or that stripe size is the best for everyone based on the results of one particular controller; every controller must be tested individually, and the results are most likely only comparable to the identical controller and identical SSDs..
    Last edited by Ourasi; 05-10-2009 at 10:46 AM.
    | Ci7 2600k@4.6ghz | Asus SaberTooth P67 | Sapphire HD7970 | Samsung B555 32" | Samsung 840 PRO 128gb + 2xIntel SSD 520 120GB Raid0 + 2xC300 64GB Raid0 | Corsair Vengeance 16GB DDR3-1600 8-8-8-24 | Vantage GPU=40250 |

  20. #20
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Copied from an OCZ forum sticky. I could not find anything from Intel regarding the best stripe size for the X25-E, so I tested it myself with synthetic benchmarks and 128/256k seemed to be the best all-round performers.

    NAND ICs used in SSDs have a 4KB page size.

    Let us start with our JMicron based SSD's

    The JMicron controller MCU (602) used in Core series, Solid series and Apex (2x in raid 0) works in the following way.

    The 602 SSD controller is basically an 8-bit MCU design, with 16-bit internal operation (in/out).
    The controller is actually 4 chips embedded in 1 package, so 4 MCUs.
    The flash chips provide multi-page read and write (actually 2 pages) with interleave.
    So, 2 bytes * 4-way * 2-way * 2 pages = a 32-page operation for the maximum-performance solution.

    32page = 32 * 4KB = 128KB stripe.

    So keeping things simple:

    Core and Solid series drives use 1x JM602 controller; with the controller being 4 controllers in 1 package, they are in effect running RAID0 with a 128k stripe internally.

    Now the Apex uses a separate RAID controller IC on the PCB of the SSD with 2x 602 controllers, so the stripe size will be 2x 128k = a 256k stripe.

    So Core and Solid 128k
    Apex................. 256k
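    Restating the sticky's arithmetic as a sketch (illustrative Python; all numbers are taken from the sticky itself and are not verified against JMicron documentation):

        page_kib   = 4   # NAND page size cited in the sticky
        width      = 2   # "2Byte": 16-bit internal operation
        mcus       = 4   # 4 MCUs in one JM602 package ("4way")
        interleave = 2   # 2-way interleave
        pages      = 2   # 2 pages per flash operation

        pages_per_op        = width * mcus * interleave * pages   # 32
        internal_stripe_kib = pages_per_op * page_kib              # 128 -> Core/Solid
        apex_stripe_kib     = 2 * internal_stripe_kib              # 256 -> Apex (2x JM602)
        print(pages_per_op, internal_stripe_kib, apex_stripe_kib)  # 32 128 256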

  21. #21
    Xtreme Member
    Join Date
    May 2007
    Posts
    191
    Quote Originally Posted by audienceofone View Post
    Copied from an OCZ forum sticky. I could not find anything from Intel regarding the best stripe size for the X25-E, so I tested it myself with synthetic benchmarks and 128/256k seemed to be the best all-round performers.
    On your controller, that is probably correct. However, as I said in my previous post, that is most likely only comparable to the exact same controller and drives, and may not even come close to the results found on another controller.. This is why it is important to test the controller you have rigorously before settling on a stripe size, and not copy someone's settings, as they might be on totally different hardware.

    I believe the OCZ sticky might hold water for their older drives and their limitations, but it is pretty safe to ignore if you have newer SSDs and they consistently perform better on smaller stripes...
    | Ci7 2600k@4.6ghz | Asus SaberTooth P67 | Sapphire HD7970 | Samsung B555 32" | Samsung 840 PRO 128gb + 2xIntel SSD 520 120GB Raid0 + 2xC300 64GB Raid0 | Corsair Vengeance 16GB DDR3-1600 8-8-8-24 | Vantage GPU=40250 |

  22. #22
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    So which stripe size has worked best for your setup?

  23. #23
    Xtreme Member
    Join Date
    May 2007
    Posts
    191
    Quote Originally Posted by audienceofone View Post
    So which stripe size has worked best for your setup?
    After a lot of work testing 16kb, 32kb, 64kb and 128kb stripes, I found 16kb and 32kb to be the best, 64kb just a bit behind, and 128kb pretty far behind, in benches/stopwatch timings of the apps/tools/utilities I consider important for OS operation and work, and also game/map loading.
    In terms of IOPS on random reads/writes of 4kb and other small file sizes, they were all equal.

    At the moment I'm on a 16kb stripe, and have been for a while now, and it is blisteringly fast in the real world, not only in benches. There is not much performance difference between 16kb - 32kb - 64kb, but the smaller sizes edge ahead. As long as there is no measurable negative impact from using these small stripes, I will continue using them. The X25-M is a perfect companion for those small stripes; some SSDs might not be. Intel was right about this in my opinion. But I must add: this is with my ICH9R, and I fully understand that these stripes might perform pretty badly on some controllers, and that's why I encourage people to test for themselves...
    | Ci7 2600k@4.6ghz | Asus SaberTooth P67 | Sapphire HD7970 | Samsung B555 32" | Samsung 840 PRO 128gb + 2xIntel SSD 520 120GB Raid0 + 2xC300 64GB Raid0 | Corsair Vengeance 16GB DDR3-1600 8-8-8-24 | Vantage GPU=40250 |

  24. #24
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Ourasi View Post
    After a lot of work testing 16kb, 32kb, 64kb and 128kb stripes, I found 16kb and 32kb to be the best, 64kb just a bit behind, and 128kb pretty far behind, in benches/stopwatch timings of the apps/tools/utilities I consider important for OS operation and work, and also game/map loading.
    In terms of IOPS on random reads/writes of 4kb and other small file sizes, they were all equal.

    At the moment I'm on a 16kb stripe, and have been for a while now, and it is blisteringly fast in the real world, not only in benches. There is not much performance difference between 16kb - 32kb - 64kb, but the smaller sizes edge ahead. As long as there is no measurable negative impact from using these small stripes, I will continue using them. The X25-M is a perfect companion for those small stripes; some SSDs might not be. Intel was right about this in my opinion. But I must add: this is with my ICH9R, and I fully understand that these stripes might perform pretty badly on some controllers, and that's why I encourage people to test for themselves...
    Interesting, especially as you have checked with both benchmarks and a stopwatch. If I go back to onboard RAID I will certainly re-evaluate the stripe size that I use.

    Have you seen this post on raid stripe sizes with ICH9R btw?

    http://www.xtremesystems.org/forums/...d.php?t=208896

    I also undertook a number of benchmarks with different stripe sizes:

    http://www.xtremesystems.org/forums/...=219707&page=2

    My results got a bit slated..... invalid benchmarks, cache results etc.... but in real life there was no perceptible difference between one stripe size and another. (Although I did not use a stopwatch.)

    Is part of the issue here that the X25 controller combines small writes into larger ones, and that very small writes can end up as a 512KB write on an untrimmed drive?
    Last edited by Ao1; 05-10-2009 at 03:02 PM. Reason: fixed link

  25. #25
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Quote Originally Posted by audienceofone View Post
    Copied from an OCZ forum sticky. I could not find anything from Intel regarding the best stripe size for the X25-E, so I tested it myself with synthetic benchmarks and 128/256k seemed to be the best all-round performers.

    NAND ICs used in SSDs have a 4KB page size.

    Let us start with our JMicron based SSD's

    The JMicron controller MCU (602) used in Core series, Solid series and Apex (2x in raid 0) works in the following way.

    The 602 SSD controller is basically an 8-bit MCU design, with 16-bit internal operation (in/out).
    The controller is actually 4 chips embedded in 1 package, so 4 MCUs.
    The flash chips provide multi-page read and write (actually 2 pages) with interleave.
    So, 2 bytes * 4-way * 2-way * 2 pages = a 32-page operation for the maximum-performance solution.

    32page = 32 * 4KB = 128KB stripe.

    So keeping things simple:

    Core and Solid series drives use 1x JM602 controller; with the controller being 4 controllers in 1 package, they are in effect running RAID0 with a 128k stripe internally.

    Now the Apex uses a separate RAID controller IC on the PCB of the SSD with 2x 602 controllers, so the stripe size will be 2x 128k = a 256k stripe.

    So Core and Solid 128k
    Apex................. 256k
    After reading this many times to try and figure out what is being said... I think it's saying that the maximum amount of data that can be written to or read from the flash media in one operation is 128K... While this is good to know, the best stripe size is actually more dependent on the average read/write operation that the OS requests... not on what the drive can handle. Steve said above that the OS typically reads/writes small pieces of data (as small as 8KB). If this is true, a 128K stripe will never yield any parallelism performance gains... NONE!

    As I said before, benchmarks are meaningless unless they emulate the kinds of data reads/writes that transpire in typical usage. Of course a benchmark tool that reads/writes data in 512K blocks will show that a 128K stripe is the best.

    I think Ourasi is on the right track.
