Page 3 of 6
Results 51 to 75 of 136

Thread: Raid 0 - Impact on NAND Writes

  1. #51
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The X79 refresh might be the one to wait for

    I just checked and the Intel SSD Toolbox does read the attributes correctly while the drives are in raid.

    As Intel counts each F1/F2 unit as 32MB, you need to convert the value shown by the Toolbox.
    In my case F1 shows as 22GB; multiplied by 32 that equals 704, which is the raw value shown by CDI and the OCZ Toolbox.
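    The conversion described above can be sketched as follows; the 32 MiB-per-count unit is taken from the post, and the helper names are just for illustration.

    ```python
    # Convert between the raw F1/F2 SMART counter (as shown by CDI or the
    # OCZ Toolbox) and the GiB figure shown by the Intel SSD Toolbox.
    # Assumption from the post: one raw count = 32 MiB.

    RAW_UNIT_MIB = 32

    def raw_to_gib(raw_count: int) -> float:
        """Raw attribute count -> GiB (as the Intel Toolbox displays it)."""
        return raw_count * RAW_UNIT_MIB / 1024

    def gib_to_raw(gib: float) -> int:
        """Toolbox GiB figure -> raw count (as CDI displays it)."""
        return int(gib * 1024 / RAW_UNIT_MIB)

    print(raw_to_gib(704))  # 22.0 GiB, matching the Toolbox
    print(gib_to_raw(22))   # 704, matching CDI / OCZ Toolbox
    ```

    (Note the coincidence that 22 × 32 = 704 as well, since 1024 / 32 = 32; both readings of "multiplied by 32" give the same number here.)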


    Last edited by Anvil; 11-07-2011 at 01:00 PM.

  2. #52
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a chart to summarise the results of testing the ratio between NAND and host writes.

    Last edited by Ao1; 11-07-2011 at 01:36 PM.

  3. #53
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    That chart shows exactly why Intel focuses on over-provisioning for Enterprise applications

    (except for the last entry, what's the difference between the last 2 entries?)

  4. #54
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ fixed

  5. #55
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    That explains it (no OP)

    Now, if only you had more drives for testing purposes; where can we find some?
    In particular, the Intel and Marvell controllers would have been very interesting vs the SF controller.
    (Also, the new Samsung controller looks strong.)

  6. #56
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    Quote Originally Posted by Anvil View Post
    I just checked and the Intel SSD Toolbox does read the attributes correctly while the drives are in raid.
    I did suggest some posts back to try smartmontools; I don't know if anyone tried it. I'm not that familiar with smartmontools myself, as it's easier for me to use my own very untidy software to read SMART through RAID, but it does seem to work fine on my RAID system, and with the proper command line / config file it can also log SMART values to file at intervals.

  7. #57
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Intel’s toolbox can indeed read the SMART values for OCZ drives in raid and the values appear the same as what the OCZ toolbox or any other SMART reader would present. I wasn’t too worried about reading the array as I had to break it up to SE the drives between different tests. That said it’s very handy to be able to read values without splitting the array. I'm working on read speeds now.

  8. #58
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    R0 128K stripe. User capacity reduced from 74.5GiB to 49.9GB to provide ~25GB of OP
    4K full span writes – 146,725 MiB
    Delete test file (4MB blocks)
    Run HD Tune
    Now refilling with 4MB blocks to recheck read speeds.
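    The over-provisioning arithmetic behind that setup is just the gap between the raw R0 capacity and the volume exposed to the OS; a quick sketch (treating both figures as GiB, since the post mixes GiB and GB):

    ```python
    # OP from shrinking the user capacity at volume creation:
    # ~74.5 GiB raw R0 capacity, ~49.9 GiB exposed to the OS.

    total_gib = 74.5   # full R0 user capacity
    volume_gib = 49.9  # capacity actually exposed to the OS
    op_gib = total_gib - volume_gib
    op_pct = op_gib / total_gib * 100

    print(f"OP: {op_gib:.1f} GiB ({op_pct:.0f}% of user capacity)")
    ```

    That works out to roughly 24.6 GiB, i.e. the ~25GB of OP quoted above, or about a third of the user capacity.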


  9. #59
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Refilled array with 4MB blocks. Deleted and then re-ran HD Tune.
    Write speeds to fill the array with 4MB blocks came out at ~39MB/s, which is very slow compared to 144.48MiB/s with the drives in a fresh state.


  10. #60
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    3rd time - Refill array with 4MB blocks. Delete and then re-run HD Tune.
    Write speeds to fill the array with 4MB blocks came out at ~80MB/s.


  11. #61
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    4th time - Refill array with 4MB blocks. Delete and then re-run HD Tune.
    Write speeds to fill the array with 4MB blocks came out at ~102MB/s.


    Last edited by Ao1; 11-08-2011 at 04:58 AM.

  12. #62
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    5th time – Seems to be as good as it's going to get.
    Write speeds to fill the array with 4MB blocks came out at ~98MB/s.

    Last edited by Ao1; 11-08-2011 at 05:07 AM.

  13. #63
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    R0 just after a SE. Read speed degradation sucks with SF drives.


  14. #64
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    Read speed degradation: Is that b/c of the lack of TRIM, or just the nature of SF drives?

  15. #65
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    It’s the nature of SF drives. With a single drive, read speeds were restored more or less immediately after a TRIM operation (see post #47).

    Writing large blocks (4MB) helps to clear up fragmented blocks, but as can be seen it does not get the drive back to optimum performance.

    I thought R0 would be bad for writes, but it’s actually beneficial with a highly random workload, especially if you have OP. Reads, on the other hand, are not good at all, but that is just a quirk of SF drives.

  16. #66
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    Thanks for all your work and clarification. I guess that is why the OCZ forums are such proponents of secure erasing drives in an array.

  17. #67
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    Ao1, what data did the 4MB blocks you wrote consist of?

  18. #68
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I used Anvil's Enterprise App to generate the 4MB blocks, which filled the drive before being deleted. I believe the blocks were incompressible. Anvil will be able to elaborate more.

  19. #69
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    Thanks.

    Quote Originally Posted by Ao1 View Post
    R0 just after a SE. Read speed degradation sucks with SF drives.
    If I understand correctly, after an SE or TRIM you're not actually reading any media for the pages/blocks that were SE'd or trimmed; instead you're getting DRAT. On my own SF system it seems like ZRAT, but according to the ATA IDENTIFY data ZRAT isn't supported. Maybe they changed the spec or it was never implemented in the firmware, idk.
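    For reference, the DRAT/ZRAT capability bits live in word 69 of the ATA IDENTIFY DEVICE data (per ACS-2: bit 14 = deterministic data after TRIM, bit 5 = trimmed ranges read back as zeros). A small sketch of decoding them; the sample word values are made up for illustration:

    ```python
    # Decode the TRIM read-back behaviour from IDENTIFY DEVICE word 69
    # (ACS-2: bit 14 = DRAT supported, bit 5 = zeroed data after TRIM).

    def trim_behaviour(word_69: int) -> str:
        drat = bool(word_69 & (1 << 14))
        zrat = bool(word_69 & (1 << 5))
        if zrat:
            return "ZRAT (trimmed LBAs read back as zeros)"
        if drat:
            return "DRAT (deterministic, but not necessarily zeros)"
        return "non-deterministic reads after TRIM"

    print(trim_behaviour(1 << 14))               # DRAT only
    print(trim_behaviour((1 << 14) | (1 << 5)))  # DRAT + ZRAT
    ```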

    So idk if in that case I would call it degradation, as you're not reading anything from the media. I was hoping to see if a forced TRIM could be accomplished for RAID 0, but after thinking about it maybe just a zero fill would be sufficient. Since the compression ratio for zero fill is ~7:1, filling up the deleted clusters of the array with zero-fill data should give back some of the media to GC. For instance, if you were to fill your 80GB array with zeros it may only fill ~12GB of media, leaving the other 68GB to be returned to GC. How long GC takes, idk. What do you think?

    DRAT - Deterministic Return After Trim
    ZRAT - Zero Return After Trim
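    The back-of-the-envelope version of the zero-fill idea above, assuming the ~7:1 compression ratio for zeroed data holds on the SF controller (the exact split depends on how roughly you round the ratio, hence ~11-12GB vs the ~12GB quoted):

    ```python
    # How much flash an all-zeros fill of the array would actually consume,
    # assuming the controller compresses zero-fill data at ~7:1.

    array_gb = 80
    ratio = 7.0
    media_used_gb = array_gb / ratio       # flash actually written
    freed_gb = array_gb - media_used_gb    # media left for GC to reclaim

    print(f"~{media_used_gb:.0f} GB of media written, "
          f"~{freed_gb:.0f} GB available to GC")
    ```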

  20. #70
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ Let’s see

    R0 with no OP. 12GB of 4K random writes over 5 minutes. That takes the controller from a fresh state to the 1st stage of degradation. (4K write speeds were still at 35.87MB/s.)

    Here is a HD Tune shot straight after the writes. Now I will leave the drive to idle for 24 power on hours and then I will re-run HD Tune.


  21. #71
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    What read speed do you get if you just fill the drive with 4MB blocks of incompressible data?

    @some_one
    The compression used is user selectable, the same settings are used throughout the application.

  22. #72
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    I did a Secure Erase and created one 90GiB volume on the RAID 0 array with two partitions. W7 installed on the first partition and W8 on the second. HDTune scale is in GB not GiB.




    Next I created 3x 1GiB 0-fill files from the W8 OS and wrote them to the W7 partition. Notice the difference in read speed at the LBA locations for those files, which is the same whether the files are deleted or not. As there is no TRIM pass-through to the array, the flash media has to retain the data for those LBAs. If TRIM worked I would expect those LBAs to return to 900MB/s on read, provided there are no writes to them.




    I also ran 9x 1GiB FF-fill files, similar results.




    Should be interesting to see your test of write degradation and if filling the volume with 0-fill data does free up media to be used in the free pool.
    Write speeds might start off high, then drop off as the free media is depleted. I guess a bigger pool of free media will increase the time before degradation takes place. On a normal system that would mean longer bursts of high write speed, with idle time used to replenish the pool. How much the pool can be replenished will depend on how many different clusters have been dirtied (written to) and the compression ratio.

    Well, the theory is one thing but I wonder if it is like that in practice.


  23. #73
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    While working on this part of the utility I played with different controllers and from what I recall the m4 cleaned up nicely (in raid) just by writing sequentially.
    I'll try to find the screen-shots later today.

  24. #74
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    So no one spotted my deliberate mistake? I benched one of my X25-M’s by mistake in post #70.

    Fresh start. SE'd drives in R0. No OP.

    HD Tune straight after a SE with no data on the drive.


    HD Tune straight after filling the drive with 4MB blocks. Wow, didn’t expect that: a 47% drop in read performance after writing to all LBAs once with 4MB blocks.


    HD Tune straight after 14,014MiB of 4K random writes


    Delete test file. (Drive is now all free space.)


    Now I will run HD Tune every hour to see how quickly GC works.

  25. #75
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    One hour in. Absolutely no difference in the benchmark.

