
Thread: Raid 0 - Impact on NAND Writes

  1. #26
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    New results based on ASU Client added to first post.

    Write distribution between the drives is a lot better, but both drives are still incurring writes in excess of the amount of data written.

    Also, it seems that after reaching a steady state there is a further degraded state down the line – an average of 66.34MB/s, i.e. write speeds ~50% below steady state after writing ~300GiB.

    Last edited by Ao1; 11-07-2011 at 03:48 AM.

  2. #27
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    With static data around, ratio between spare area and usable area for 4K full span is higher which would lead also to better performance. Trimming might have helped because of the file size (4MiB which is equal to 512 or 1024 pages) possibly at the cost of higher WA. But for 100% 4K writes full span where drive would be seen as a single file in which pages are constantly overwritten, I cannot see how trim will help, as each overwritten page would free only one. But who knows, maybe with trim help, wear leveling algorithm would choose better candidates for erasing.
    I'm going to re-test a single drive later. I believe it might have had 50% static data on it.
    Last edited by Ao1; 11-06-2011 at 10:45 AM.

  3. #28
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll send you a link to a slightly adjusted app.
    (some minor adjustments)

  4. #29
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    With static data around, ratio between spare area and usable area for 4K full span is higher which would lead also to better performance. Trimming might have helped because of the file size (4MiB which is equal to 512 or 1024 pages) possibly at the cost of higher WA. But for 100% 4K writes full span where drive would be seen as a single file in which pages are constantly overwritten, I cannot see how trim will help, as each overwritten page would free only one. But who knows, maybe with trim help, wear leveling algorithm would choose better candidates for erasing.
    You are right. I'm running the enterprise app (4K random) on the V2 & V3 as single drives with no static data, and the drives are degrading just as badly as the R0 array. I'm down to 13MB/s on both drives after only 10GB of data.

    edit: lol, after 16GB I'm down to 10MB/s on both drives. Worse than R0. Maybe with R0 the writes are getting cached. I'm going to create a R0 array, but this time I'll set aside 50% for OP.
    Last edited by Ao1; 11-06-2011 at 12:53 PM.

  5. #30
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Let it run for a few hundred GB, as I have the feeling the speed will stabilize at an even lower value. Also, what is the WA value? For the WA, please take the readings after the write speed has stabilized, as I believe instantaneous WA increases as write speed decreases. With 50% OP you should see a huge difference in write speed, but what would be more interesting to follow is average/max write latency between those states.
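    A minimal sketch of how interval WA could be pulled from SMART, assuming SandForce-style raw counters (attribute 233/0xE9 as GiB written to NAND, 241/0xF1 as GiB written by the host – check what your firmware actually reports) and that smartmontools' smartctl is available:

    ```python
    # Sketch: interval write amplification from two SMART snapshots.
    # Assumes SandForce-style raw values: attr 233 (0xE9) = GiB written to NAND,
    # attr 241 (0xF1) = GiB written by the host. Verify against your firmware docs.
    import re
    import subprocess

    def read_attrs(device):
        """Return {attribute_id: raw_value} parsed from `smartctl -A <device>`."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        attrs = {}
        for line in out.splitlines():
            m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
            if m:
                attrs[int(m.group(1))] = int(m.group(2))
        return attrs

    def interval_wa(before, after):
        """WA over the interval between two snapshots (NAND GiB / host GiB)."""
        nand = after[233] - before[233]
        host = after[241] - before[241]
        return nand / host if host else float("nan")

    # Take one snapshot before the run and one after the write speed has
    # stabilized, e.g.:
    #   before = read_attrs("/dev/sda")
    #   ... run the 4K workload ...
    #   after = read_attrs("/dev/sda")
    #   print(f"interval WA = {interval_wa(before, after):.2f}")
    ```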

  6. #31
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Stroll on. R0 128K stripe 50% OP 4K random "full" span

    Last edited by Ao1; 11-07-2011 at 03:48 AM.

  7. #32
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ I'll go back to the single drives later, as they will take forever at that speed. (I'll monitor avg/max latency as well; I already know it's going to get very high.)

    R0 128K stripe 50% OP 4K random "full" span





    Last edited by Ao1; 11-06-2011 at 01:49 PM.

  8. #33
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    What QD is used for generating the 4K writes? If it is 1, it would also be interesting to see what happens with higher QD values, although I suspect the aggregated speed will not increase.
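    For reference, a strictly QD1 4K random-write loop boils down to something like the sketch below (not the app used in this thread; it assumes a pre-created test file and POSIX pwrite, and the block of random data keeps the writes incompressible for the SandForce controllers). Higher queue depths would need overlapped/async I/O rather than one outstanding request at a time.

    ```python
    # Minimal QD1 4K random-write sketch over a pre-allocated test file.
    # Illustrative only; real tools use unbuffered I/O (O_DIRECT /
    # FILE_FLAG_NO_BUFFERING) so the OS cache doesn't mask the drive.
    import os
    import random
    import time

    PATH = "testfile.bin"            # assumed pre-created, spanning the test area
    BLOCK = 4096                     # 4 KiB writes
    payload = os.urandom(BLOCK)      # incompressible data

    fd = os.open(PATH, os.O_RDWR)
    blocks = os.fstat(fd).st_size // BLOCK

    written, t0 = 0, time.time()
    try:
        while True:
            offset = random.randrange(blocks) * BLOCK   # 4K-aligned random offset
            os.pwrite(fd, payload, offset)              # QD1: one request at a time
            os.fsync(fd)                                # crude stand-in for unbuffered I/O
            written += BLOCK
            if written % (256 * 1024 * 1024) == 0:      # report every 256 MiB
                mbps = written / (1024 * 1024) / (time.time() - t0)
                print(f"{written >> 20} MiB written, avg {mbps:.1f} MB/s")
    finally:
        os.close(fd)
    ```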

  9. #34
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    First post updated. Using 50% OP transformed the outcome. Hard to believe it had that much impact.




    @ Sergiu I believe it is all qd1
    Last edited by Ao1; 11-07-2011 at 03:48 AM.

  10. #35
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Vertex 2 - 4K random - 14.65GB user space, 22.62GB OP
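    (For reference, the extra OP in that V2 setup works out to roughly 61% of the visible capacity, from the figures above:)

    ```python
    # Spare-area arithmetic for the Vertex 2 setup above.
    user_gb = 14.65                          # partitioned user space
    op_gb = 22.62                            # left unallocated as extra OP
    visible_gb = user_gb + op_gb             # ~37.27 GB visible capacity
    print(f"{op_gb / visible_gb:.0%} of the visible capacity kept spare")  # ~61%,
    # on top of whatever OP the controller already reserves internally
    ```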



    Meanwhile the Vertex 3 is struggling. 4K random – no OP


  11. #36
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    At least it is still much faster than a RAID array based on normal HDDs. Yet the difference is impressive. Luckily there is no "normal" usage pattern that would lead to something like that.

  12. #37
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    OT: I'd be quite worried about "technological progress" if a newer SSD (Vertex 3) from the same company (OCZ) were heavily outperformed by an earlier-generation SSD (Vertex 2).

  13. #38
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by alfaunits View Post
    OT: I'd be quite worried about "technological progress" if a newer (Vertex 3) SSD of the same company (OCZ) is outperformed heavily by an earlier generation SSD (Vertex 2).
    It’s not. The difference is over-provisioning. I can't believe how much difference it makes. It’s as if the workload was converted to sequential. Now I can see why TRIM for RAID is not at the top of developers’ lists. With a random workload it makes no difference. With RAID you get cache to help alleviate random writes, plus with OP it doesn’t matter anyway.

    I’m finishing off the 4K random on the V3 and then I will run it again, but with static data. Let’s see how good the wear levelling is. From initial testing it’s not as good as OP, but it’s not far behind.

  14. #39
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I had to reboot before I got to 200GB. First post updated.


  15. #40
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Wow... so the speed dropped even further. Could you look at WA for the next 50-100GB? By now the write speed should have stabilized. Also, there might be a difference in speed in favour of the Vertex 2 because of NAND geometry in the worst-case scenario. Having 512KB blocks instead of 2MB (or 1MB) ones might help a little when recycling. However, this might be negated by advancements in the controller and better NAND write speed.

  16. #41
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    V2 with ~50% static data. So far it's holding out better compared to no static data, but not as well as OP.


  17. #42
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    The graph is lacking in granularity, but it was the only way I could find to show the stepped drop in performance that occurs with 4K random (Vertex 3, no OP, full span on a SE’d drive). Write speeds are consistent for the first 2 minutes and then drop by around 50%, fluctuating below that level. After 6 minutes write speeds fluctuate considerably – on average I’d say another 50% drop.
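    A sketch of one way to get finer granularity: timestamp every completed write and average into short buckets, then plot the buckets. Wrap it around whatever write loop is in use (the helper names below are assumptions).

    ```python
    # Bucketed throughput log to make the stepped drops visible.
    import time
    from collections import defaultdict

    BUCKET_S = 5                        # 5-second buckets
    buckets = defaultdict(int)          # bucket index -> bytes written in it
    t0 = time.time()

    def record(nbytes):
        """Call after each completed write with the number of bytes written."""
        buckets[int((time.time() - t0) // BUCKET_S)] += nbytes

    def dump():
        """Print avg MB/s per bucket; feed this to a spreadsheet for the graph."""
        for idx in sorted(buckets):
            print(f"{idx * BUCKET_S:>6}s  {buckets[idx] / (1024 * 1024) / BUCKET_S:6.1f} MB/s")
    ```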


  18. #43
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a direct comparison between a drive with ~50% static data and an empty drive (tests started around the same time). Having static data certainly helps, but it’s not as good as OP.


    Here are the response times of the V3, taken as the drive started to degrade: 83.44% of writes were below 10MB/s and 41.28% of (4K) transfers were between 500µs and 1ms. I’ll check if I get time, but I think the V3’s response times are much better than the V2’s.

    Last edited by Ao1; 11-07-2011 at 06:36 AM.

  19. #44
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Getting really interesting

    You should be able to read SMART even if the drives are in RAID; I can read my m4s using the Intel SSD Toolbox. I'll try a small array of SF drives later.
    (the question is will all attributes display or only the ones that Intel uses?)
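    A possible way to check this outside the Toolbox, assuming smartmontools is installed (the /dev/csmi0,0-style device name for drives behind the Intel RAID driver is an assumption – `smartctl --scan` shows what is actually visible on your system):

    ```python
    # Sketch: list devices smartctl can see and dump SMART attributes per member.
    import subprocess

    def smartctl(*args):
        return subprocess.run(["smartctl", *args],
                              capture_output=True, text=True).stdout

    # Devices exposed by the OS / RAID driver (if the driver passes them through).
    print(smartctl("--scan"))

    # All SMART attributes for one member; whether vendor-specific attributes
    # (e.g. the SandForce write counters) appear depends on the driver.
    print(smartctl("-A", "/dev/csmi0,0"))
    ```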

  20. #45
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    So with 50% static data, long-term performance is doubled and writes to NAND are reduced. I'm wondering how this fits with what could be observed from the B1 values on SymbiosVyse's drive (which had no static data).



    Quote Originally Posted by Anvil View Post
    Getting really interesting
    Thanks Anvil, this has been quite hard work (time intensive). It's been a bit of an eye opener for me though, so I'm glad I did it.
    Last edited by Ao1; 11-07-2011 at 09:56 AM.

  21. #46
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    Wow... so speed dropped even further. Could you look for WA for next 50-100GB? Now write speed should have been stabilized. Also, there might be a difference in speed in favour of Vertex 2 because of NAND geometry in worst case scenario. Having 512KB blocks instead of 2MB ones (or 1MB) might help a little when recycling. However these might be negated by advancements in controller and better NAND write speed.
    There you go. (Ignore the elapsed time as I had to pause briefly a few times.)
    Ratio between NAND writes and host writes = 1:0.148 (nasty)


  22. #47
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    The reads are just as important (if not more so) as the writes, and the test shows that it's highly degraded. I'd say 134MB/s is as expected from such an exercise. (unfortunately)

    HDTune won't let you run a write test with an active partition, but the read test should work; otherwise there is something seriously wrong.

    edit:
    Next time you should try deleting the file from the Endurance app, then fill the drive using 4MB blocks and then do a HDTune read test. It should show how quickly it restores performance, reads in particular.
    This is on a single V3. When I stopped the 4K writing I deleted the test file and started up HD Tune. It looks like a TRIM operation cleaned up the drive, allowing read speeds to return.





    (Edit: Note there is only one IOP for the TRIM command. That is why the drive appears to hang: the whole drive is cleaned up in one go.)
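    A rough stand-in for the HD Tune read pass, for anyone without it: time large sequential reads from the raw device before and after the delete/TRIM. The device path below is an assumption, it needs admin rights, and the reads go through the OS cache unless unbuffered I/O is used, so treat the numbers as indicative only.

    ```python
    # Sequentially read the first few GiB of the raw device and report MB/s.
    import time

    DEVICE = r"\\.\PhysicalDrive1"     # assumed - point at the SSD under test
    CHUNK = 8 * 1024 * 1024            # 8 MiB sequential reads
    SPAN = 4 * 1024 * 1024 * 1024      # sample the first 4 GiB

    done, t0 = 0, time.time()
    with open(DEVICE, "rb", buffering=0) as dev:
        while done < SPAN:
            data = dev.read(CHUNK)
            if not data:               # end of device
                break
            done += len(data)
    elapsed = time.time() - t0
    print(f"avg read speed: {done / (1024 * 1024) / elapsed:.0f} MB/s")
    ```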
    Last edited by Ao1; 11-07-2011 at 12:19 PM.

  23. #48
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    It cleans up nicely, 20-21 seconds isn't that bad for a full TRIM.

    I expect this is on a 3Gb/s controller?
    6Gb/s will probably show marginally better results.

  24. #49
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Yep, it’s a 3Gb/s controller. Still, for 4K, bandwidth limits are not such a problem (who knows for how much longer; maybe it’s not so far away).
    I was going to update to X79, but I’m not so sure now. Might wait another gen.

  25. #50
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    There you go. (Ignore the elapsed time as I had to pause briefly a few times.).
    Ratio between nand writes and host writes = 1:0.148 (nasty)
    That's quite a good ratio for such a test, much better than what I expected! I would have expected something like 1:0.1 to 1:0.03. But most probably you would need to know the fine details of the wear-leveling algorithm to design something against it.
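    (Expressed the other way round, those ratios correspond to these write-amplification factors:)

    ```python
    # NAND:host ratios above converted to write-amplification factors.
    measured_wa = 1 / 0.148                    # ~6.8x for the full-span 4K run
    expected_wa = (1 / 0.1, 1 / 0.03)          # sergiu's 1:0.1 .. 1:0.03 guess = 10x .. 33x
    print(f"measured WA ~ {measured_wa:.1f}x, "
          f"expected {expected_wa[0]:.0f}x-{expected_wa[1]:.0f}x")
    ```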
