
Thread: Raid 0 - Impact on NAND Writes

  1. #1
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597

    Raid 0 - Impact on NAND Writes

Here I experiment with RAID 0 to see the impact on writes to NAND with a 128K and a 4K stripe.

    The Set Up
    Frankenstein R0
    Vertex 2 and Vertex 3
    User capacity 74.5GiB
Test file size: 74.43 GiB (4MB block size)
    Workload: 4K random full span, 100% incompressible
    Drive in steady state

Stripe: 128K. SMART readings before starting:

    Vertex 2
    E9 41216
    EA 44672

    Vertex 3
    E9 41600
    EA 41552

    SMART readings after:

    Vertex 2
    E9 42112
    EA 44864

    Vertex 3
    E9 41923
    EA 41726

    Amount of data written = 279,135.94MiB + 76,216.32MiB = 355,352.26MiB/ 347.02GiB

    Nand Writes to Vertex 2 = 896GiB
    Nand Writes to Vertex 3 = 323GiB
    Total writes to Nand = 1,219GiB

    Host Writes to Vertex 2 = 192GiB
    Host Writes to Vertex 3 = 174GiB
    Total writes to Host = 366GiB

    Ratio between nand writes and host writes = 1:0.30
    Speed at end of run = 10.03MB/s
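
    For reference, here is the arithmetic behind these figures as a minimal Python sketch. It assumes, as this post does, that the E9 (raw NAND writes) and EA (host writes) raw values can be read directly as GiB:

```python
# Minimal sketch of the NAND-vs-host write arithmetic used in this post.
# Assumes the E9 (NAND writes) and EA (host writes) raw values are in GiB,
# as they are treated throughout this thread.

def deltas(e9_before, e9_after, ea_before, ea_after):
    nand = e9_after - e9_before    # GiB written to NAND
    host = ea_after - ea_before    # GiB written by the host
    return nand, host

# 128K stripe run, using the SMART readings above
v2_nand, v2_host = deltas(41216, 42112, 44672, 44864)   # Vertex 2 -> 896, 192
v3_nand, v3_host = deltas(41600, 41923, 41552, 41726)   # Vertex 3 -> 323, 174

total_nand = v2_nand + v3_nand                           # 1,219 GiB
total_host = v2_host + v3_host                           # 366 GiB
print(f"Ratio (NAND:host) = 1:{total_host / total_nand:.2f}")   # 1:0.30
print(f"Write amplification = {total_nand / total_host:.2f}")   # ~3.3
```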

Stripe: 4K. SMART readings before starting:

    Vertex 2
    E9 42,112
    EA 44,864

    Vertex 3
    E9 41,936
    EA 41,734

    SMART readings after:

    Vertex 2
    E9 43072
    EA 44992

    Vertex 3
    E9 42,252
    EA 41,903

    Amount of data written = 268,862 MiB + 76,216.32MiB = 345,078.32MiB/ 336.99GiB

    Nand Writes to Vertex 2 = 960GiB
    Nand Writes to Vertex 3 = 316GiB
    Total writes to Nand = 1,276GiB

    Host Writes to Vertex 2 = 128GiB
    Host Writes to Vertex 3 = 169GiB
    Total writes to Host = 297GiB

    Ratio between nand writes and host writes = 1:0.23
    Speed at end of run = 12.75MiB/s

Stripe: 128K. 4K random full span, 50% OP. SMART readings before starting:

    Vertex 2
    E9 44,096
    EA 45,760

    Vertex 3
    E9 43,131
    EA 42,670

    SMART readings after:

    Vertex 2
    E9 44,224
    EA 45,888

    Vertex 3
    E9 43,268
    EA 42,789

Amount of data written = 206,573.44MiB + 37,785.6MiB = 238.63GiB

    Nand Writes to Vertex 2 = 128GiB
    Nand Writes to Vertex 3 = 137GiB
    Total writes to Nand = 265GiB

    Host Writes to Vertex 2 = 128GiB
    Host Writes to Vertex 3 = 119GiB
    Total writes to Host = 247GiB

Ratio between nand writes and host writes = 1:0.93
    Speed at start of run = 48.95MB/s
    Speed at end of run = 37.71MB/s

ASU Endurance App (client workload, not enterprise). Drives SE’d. Stripe: 128K. SMART readings before starting:

    Vertex 2
    E9 43,392
    EA 45,248

    Vertex 3
    E9 42,529
    EA 42,170

    SMART readings after:

    Vertex 2
    E9 43,648
    EA 45,440

    Vertex 3
    E9 42,717
EA 42,350

Amount of data written = 356.13GiB

    Nand Writes to Vertex 2 = 256GiB
    Nand Writes to Vertex 3 = 188GiB
    Total writes to Nand = 444GiB

    Host Writes to Vertex 2 = 192GiB
    Host Writes to Vertex 3 = 180GiB
    Total writes to Host = 372GiB

    Ratio between nand writes and host writes = 1:0.84
    Speed at end of run = 66.34 MiB/s


Single Drive - Vertex 2. 4K random full span. 14.65GB user space, 22.62GB OP. SMART readings before starting:

    E9 44,224
    EA 45,888

    SMART readings after:

    E9 44,480
    EA 46,080


    Amount of data written = 206,555.08MiB + 14,899.2MiB = 216.26GiB

    Nand Writes to Vertex 2 = 256GiB
    Host Writes to Vertex 2 = 192GiB

    Ratio between nand writes and host writes = 1:0.75
    Speed at end of run = 31.12 MB/s

Single Drive - Vertex 3. 4K random full span, no OP. SMART readings before starting:

    E9 43,268
    EA 42,789

    SMART readings after:

    E9 44,010
    EA 42,926

Amount of data written = 82,850.78MiB + 57,139.20MiB = 136.70GiB

Nand Writes to Vertex 3 = 742GiB
    Host Writes to Vertex 3 = 137GiB

    Ratio between nand writes and host writes = 1:0.18
    Speed at end of run = 31.12 MB/s

    Summary



TRIM is not effective if you have a 4K random full span workload.

    Over-provisioning had a significant impact on reducing wear, whilst at the same time increasing write speeds.

    With a R0 4K array it took around 431GB of writes for the drive to get to a steady state when running the ASU client workload. Write speeds progressively fell by 26%. (See post #20)

    When running the 4K random write workload, write speeds dropped by 50% after 2 minutes and started to fluctuate. After a further 4 minutes write speeds dropped by 50% again and fluctuated more. (See post #42)

    Read speeds were significantly impacted when the drive was in a degraded state (134MB/s on a R0 array). Steady state performance came out at around 290MB/s for reads.
    Last edited by Ao1; 11-08-2011 at 02:14 AM.

  2. #2
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Could that be throttling at all? I doubt it with the low usage so far though. Very interesting testing! Subbed.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
100,000 MiB in and the write speed is now 8.34MB/s. (I don't think it has anything to do with LTT.)

  4. #4
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Wow, that is terrible tbh. Now, do you feel that might have something to do with the inability of SF to handle incompressible data effectively, or is it just reflective of results with most SSDs?

    There is an effect on the write speed of typical SSDs during long testing scenarios with writes; that is where enterprise devices shine, as they are truly optimized for it. Here is an article, and a snip, that I have had in my read list for a while now. I am going to do some variant of this type of testing for a future review of lightning 6Gb/s devices.

    This is with single devices; the RAID array responses will be vastly different of course.
    www.itworld.com/storage/187659/review-enterprise-solid-state-drives

    Last edited by Computurd; 11-05-2011 at 11:12 AM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  5. #5
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I’m using a special enterprise version of Anvil’s app. The write speed seems to have levelled out now. This is a torture test for sure. Without TRIM or idle time the drive is incurring the max read-modify-write penalty. A single drive with TRIM did not behave this way.

I don’t know if I could bring myself to inflict this kind of test on my Intels. On the other hand, it would be interesting to see how they would perform in a R0 setup.

    The real reason to do this, however, is to see how many writes end up on each SSD. When I’m done with 4K I’ll run the normal version of Anvil’s app to see if a mainly sequential load makes any difference.

  6. #6
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    There will be an option to fill the drive using 4KB blocks (instead of 4MB) and that really makes a difference for some of the drives.

  7. #7
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
Write speed has improved a little. It's now 10.08MB/s with 268,750MiB of data. As it is taking so long I'm tempted to break up the array and check the SMART values.

  8. #8
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    That would be interesting

Please do an HDTune benchmark using the 4MB block size; seq reads should have taken quite a dive vs the fresh state.

  9. #9
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
Any special settings on HD Tune? Full or partial? Fast or accurate?
    Last edited by Ao1; 11-05-2011 at 02:03 PM.

  10. #10
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Just the 4MB block size (found in settings) and a full span read test -> Benchmark tab

  11. #11
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
HD Tune said it would not run with an active partition, so I deleted the partition, but it still would not run. After reinstating the partition, AS SSD gave me:
Write 14.06 MB/s
    Read 134.00 MB/s

    Going to break the array up now to check the writes



    Last edited by Ao1; 11-07-2011 at 03:49 AM.

  12. #12
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
The reads are just as important (if not more so) as the writes, and the test shows that it's highly degraded. I'd say 134MB/s is as expected from such an exercise. (Unfortunately.)

    HDTune won't let you run a write test with an active partition, but the read test should work; otherwise there is something seriously wrong.

    edit:
    Next time you should try deleting the test file from the Endurance app, then fill the drive using 4MB blocks and then do an HDTune read test. It should show how quickly it restores performance, reads in particular.
    Last edited by Anvil; 11-05-2011 at 02:27 PM.

  13. #13
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
First post updated with results. 4K stripe coming up next (after a couple of HD Tune read benchmarks).

  14. #14
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    Quote Originally Posted by Ao1 View Post
As it is taking so long I'm tempted to break up the array and check the SMART values.
    Can you not read SMART while in RAID0?

  15. #15
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
Now running with a 4K stripe. I let the drive idle for a few hours, and formatting the drive would have generated a TRIM operation, but the write speeds were still heavily degraded when I started the test. The sequential test file speed came out at 15.41MB/s (compared to 100.34MB/s when I ran the 128K stripe), although the 4K stripe seems to work a bit better with the 4K workload. 100,000MiB in and it's sitting at 13.75MB/s (compared to 8.34MB/s with the 128K stripe).

    The graph shows the slowdowns that occur as the drive tries to clear a path ahead of writing.

    When I’m done with this run I will S/E the drives and then see how long it takes to get to a fully degraded state.

    Quote Originally Posted by some_one View Post
    Can you not read SMART while in RAID0?
Sadly no, and I can’t find a way to get around it.



    Last edited by Ao1; 11-07-2011 at 03:49 AM.

  16. #16
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    Have you tried smartmontools?

    What does running "smartctl --scan" show?
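
    If it helps, here is a rough sketch of how one could loop over whatever `smartctl --scan` reports and dump the attribute table for each device. It assumes smartmontools is installed and on the PATH; whether the RAID 0 member drives show up at all depends on the controller and driver, so this may simply confirm that they are hidden:

```python
# Rough sketch: list the devices reported by "smartctl --scan" and dump the
# SMART attribute table for each one. Assumes smartmontools is installed and
# on the PATH; whether RAID member drives are visible at all depends on the
# RAID controller/driver.
import subprocess

def scan_devices():
    out = subprocess.run(["smartctl", "--scan"],
                         capture_output=True, text=True, check=True).stdout
    # Each scan line looks like: "<device> -d <type> # <comment>"
    devices = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "-d":
            devices.append((parts[0], parts[2]))
    return devices

for dev, dev_type in scan_devices():
    print(f"=== {dev} (type {dev_type}) ===")
    result = subprocess.run(["smartctl", "-A", "-d", dev_type, dev],
                            capture_output=True, text=True)
    print(result.stdout)
```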

  17. #17
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
First post updated. The 4K stripe was marginally faster, but the ratio between NAND and host writes was worse, which may be attributable to the fact that the drive was in a degraded state on this run.

    Drives S/E’d and running 4K random to see how long it takes for the drive to degrade. (It started to degrade after 5 minutes. Currently 227 seconds in and avg write speeds are at 14MB/s.)
    Last edited by Ao1; 11-06-2011 at 07:34 AM. Reason: typos

  18. #18
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
OK, disks secure erased and reset in RAID 0 with a 4K stripe.

    Avg speed to fill the drive = 144.37MiB/s.

    Fully degraded state within 362.45 seconds & 24.15GiB (+ test file 74.43GiB).

    Last edited by Ao1; 11-07-2011 at 03:50 AM.

  19. #19
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
Interesting tests, yet somehow predictable. 4K writes full span with no static data should lead to WA around 10-20, and maybe even higher degradation than what you see. For such a scenario, only a larger spare area should help. I believe TRIM is useless, because there would be no complete free block to be cleared. You could do an SE and decrease the partition size afterwards to simulate extra spare area.
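
    To illustrate why the spare area is the main lever for this kind of workload, here is a toy greedy garbage-collection simulation of uniform random single-page (4K-style) overwrites across the whole logical span. The block/page geometry, the GC policy and the spare-area fractions are made-up illustration values, not a model of the SandForce firmware:

```python
# Toy greedy-GC flash model: uniform random single-page overwrites across the
# whole logical span. Purely illustrative; geometry, GC policy and the lack of
# static data/compression are simplifying assumptions, not SandForce firmware.
import random

def simulate(spare_fraction, blocks=128, pages_per_block=64, random_writes=400_000):
    total_pages = blocks * pages_per_block
    logical_pages = int(total_pages * (1 - spare_fraction))   # user-visible span
    assert logical_pages <= total_pages - 2 * pages_per_block, "too little spare"

    mapping = {}                                 # logical page -> (block, page)
    valid = [dict() for _ in range(blocks)]      # block -> {page: logical page}
    free_blocks = list(range(blocks))
    open_block, next_page = free_blocks.pop(), 0
    nand = [0]                                   # NAND page programs

    def program(lpn):
        nonlocal open_block, next_page
        if next_page == pages_per_block:         # open block is full
            open_block, next_page = free_blocks.pop(), 0
        old = mapping.get(lpn)
        if old is not None:                      # invalidate the stale copy
            del valid[old[0]][old[1]]
        mapping[lpn] = (open_block, next_page)
        valid[open_block][next_page] = lpn
        next_page += 1
        nand[0] += 1

    def collect():                               # greedy GC: fewest valid pages
        victim = min((b for b in range(blocks)
                      if b != open_block and b not in free_blocks),
                     key=lambda b: len(valid[b]))
        for lpn in list(valid[victim].values()):
            program(lpn)                         # relocate still-valid pages
        free_blocks.append(victim)

    for lpn in range(logical_pages):             # precondition: fill the span once
        program(lpn)
    nand[0] = 0                                  # measure only the random phase
    for _ in range(random_writes):
        while len(free_blocks) < 2:
            collect()
        program(random.randrange(logical_pages))
    return nand[0] / random_writes               # write amplification

for spare in (0.10, 0.25, 0.50):                 # arbitrary spare-area fractions
    print(f"spare area {spare:.0%}: simulated WA ~ {simulate(spare):.1f}")
```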

  20. #20
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ASU Standard Endurance Test 12GiB free. 100% incompressible.
    Total Time: 1.25 hours
R0 4K stripe. MB/s from fresh to steady state.

    Last edited by Ao1; 11-07-2011 at 03:50 AM.

  21. #21
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    Interesting tests, yet somehow predictable. 4K writes full span with no static data should lead to WA around 10-20 and maybe even higher degradation than what you see. For such a scenario, only larger spare area should help. I believe trim is useless, because there would be no complete free block to be cleared. You could do SE and decrease partition size after to simulate extra spare area.
I think there might have been some static data when I ran the same test on a single drive. Can’t remember, but I will check later; however, the single drive with TRIM performed significantly better. Avg MB/s after 0.7TiB of data = 28, compared to 13 with a 4K stripe.
    I'm running the same tests with the normal version of the endurance app now, so let’s see what happens.

  22. #22
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Anvil, here is a HD Tune run straight after the endurance app run above. (4MB)

    Last edited by Ao1; 11-07-2011 at 03:50 AM.

  23. #23
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Hong Kong
    Posts
    1,905
    Could someone put the results that have so far been observed into more layman terms? I'm having a bit of trouble following.

  24. #24
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by CedricFP View Post
    Could someone put the results that have so far been observed into more layman terms? I'm having a bit of trouble following.
    Moved to 1st post
    Last edited by Ao1; 11-07-2011 at 01:48 PM. Reason: Moved to 1st post

  25. #25
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
I think there might have been some static data when I ran the same test on a single drive. Can’t remember, but I will check later; however, the single drive with TRIM performed significantly better. Avg MB/s after 0.7TiB of data = 28, compared to 13 with a 4K stripe.
    I'm running the same tests with the normal version of the endurance app now, so let’s see what happens.
With static data around, the ratio between spare area and usable area for 4K full span is higher, which would also lead to better performance. Trimming might have helped because of the file size (4MiB, which is equal to 512 or 1024 pages), possibly at the cost of higher WA. But for 100% 4K writes full span, where the drive would be seen as a single file in which pages are constantly overwritten, I cannot see how TRIM will help, as each overwritten page would free only one page. But who knows, maybe with TRIM's help the wear-levelling algorithm would choose better candidates for erasing.
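
    A rough back-of-the-envelope for that first point, with made-up capacities (none of these numbers are measurements from this thread): static data shrinks the span that is actually being churned, so the ratio of physical spare area to actively rewritten data rises even though the spare area itself stays the same.

```python
# Back-of-the-envelope: static data shrinks the churned span, so the ratio of
# physical spare area to actively rewritten data rises. All capacities here
# are made-up illustration values, not measurements from this thread.
def spare_to_active_ratio(raw_gib, user_gib, static_gib):
    spare = raw_gib - user_gib          # physical capacity the host never sees
    active = user_gib - static_gib      # logical span actually being overwritten
    return spare / active

# Full-span 4K random writes, no static data
print(spare_to_active_ratio(raw_gib=128, user_gib=120, static_gib=0))   # ~0.07
# Same drive, but with 60 GiB of data that is never rewritten
print(spare_to_active_ratio(raw_gib=128, user_gib=120, static_gib=60))  # ~0.13
```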
