Page 10 of 220
Results 226 to 250 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #226
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    From what I've read, the SF controller doesn't tolerate high deltas on least/most worn "flash blocks", meaning that it starts shuffling static data when needed, don't know about other controllers, there may be some static wear-leveling but we'll probably never know.
    I didn't think any of the G2 drives could do static data rotation, although I have heard talk of it. The X25-M certainly can't do it; maybe the 320 can, however.

    12GB of data on the SF drive has only been written to once. If that 12GB of NAND could be swapped as the rest wears down it would extend the life quite a bit.

    Without static data rotation the MWI will get to 0, but there will still be 12GB of unscathed NAND. That is a likely real life scenario as well.

  2. #227
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I think the Intel drives (and most others) do static data rotation; I can't see how wear levelling would work if they didn't.

    SandForce static data rotation

    It does require idle periods though
    -
    Hardware:

  3. #228
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,796
    Quote Originally Posted by Anvil View Post
    @alfaunits
    Whether the random data generator is producing 10MB/s or 100MB/s has nothing to do with writing randomly or not.
    If you are doing it in the same thread as the actual writing, it does, because the writing needs to wait on the generator.

    The Areca can sustain those speeds until the cache is filled or as long as the array can sustain that speed, I was using a small array incapable of coping with that speed and thus it wasn't sustained. (it lasted for a few seconds)
    The easy test is just to generate the random data without writing the data, that would tell the potential of the random "generator".
    I did not even take the Areca into consideration here, just the bare CPU, memory and PCI-e link. If you do it in the same thread and you can send 1.5GB/s to the Areca (it does not matter what the Areca does with the data, it can even ditch it), that means the generator and the PCI-e link must each manage about 3GB/s on their own: the thread's time is split between the two, so the combined rate is roughly half of each standalone rate = 1.5GB/s. You have to agree that's... too much?
    Are you generating the random bits in the same thread that does the writes, or is there a separate background thread for them? (I suggested a background thread already; you said you have no time, so I presume it's all done in the same thread.)
    I know you go "grr" when reading my posts, but I am not trying to play smart: if the above numbers and the threading assumption are correct, something is fishy and the entire test is skewed.
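(The background-thread setup suggested here can be sketched as follows; a minimal Python illustration, not Anvil's actual code. `os.urandom` stands in for whatever generator the app uses, and the 4MB buffer size, queue depth of 3 and file name are arbitrary choices.)

```python
import os
import queue
import threading

BUF_SIZE = 4 * 1024 * 1024  # 4 MB per buffer, an arbitrary choice
QUEUE_DEPTH = 3             # pre-generate a few buffers ahead

def generator(q: queue.Queue, count: int) -> None:
    """Fill buffers with random bytes in the background."""
    for _ in range(count):
        q.put(os.urandom(BUF_SIZE))  # blocks once QUEUE_DEPTH buffers are waiting
    q.put(None)                      # sentinel: no more buffers

def writer(q: queue.Queue, path: str) -> int:
    """Consume pre-generated buffers; waits on the disk, not the generator."""
    written = 0
    with open(path, "wb") as f:
        while (buf := q.get()) is not None:
            f.write(buf)
            written += len(buf)
    return written

q = queue.Queue(maxsize=QUEUE_DEPTH)
threading.Thread(target=generator, args=(q, 8), daemon=True).start()
total = writer(q, "random_test.bin")
print(total)  # 8 buffers of 4 MiB = 33554432 bytes
```

With a deep enough queue, the writer only ever blocks on the drive, so the measured rate reflects the SSD rather than the random generator.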
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  4. #229
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Why are you always making assumptions, loaded with negativity?
    (that's how I'm reading most of them unfortunately, I may be alone in making this conclusion, I don't know?)

    To answer some of your "questions":
    1.5GB/s was using threads.
    The Endurance test is not using multiple threads.
    The random data is currently being pregenerated using 3 buffers...

    Now, can we continue this thread, which is about Endurance testing, not about making assumptions.

  5. #230
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,796
    Because I question illogical things. I want to know whether you're doing something useful here or just wasting several SSDs on sequential transfers. And I know I'm not alone in questioning it.
    GNU RNGs can barely do tens of MB/s and hardware RNGs don't do >500MB/s, so yes, I am quite suspicious of the claim that, without threads, the RNG is fast enough relative to the SSD's speed not to affect the overall "write" speed, i.e. that this isn't just sequential transfers overall.

    To make a test, you have to assume it's valid. This looks invalid. But it's your money dumped on an SSD. If you do reach close to 200TB for the 40GB X25-M you'll just prove me right here

    And if I am completely wrong, my sincere apologies. I don't pick random fights with people I can learn something from, and you are one of them. Please consider it constructive criticism, or my learning curve.
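(The raw generation rate being questioned here is easy to measure in isolation; a quick sketch, with Python's `os.urandom` standing in as an example generator. Rates will vary by platform and by RNG, so the numbers are illustrative only.)

```python
import os
import time

CHUNK = 4 * 1024 * 1024   # generate 4 MB per call
ROUNDS = 32               # 128 MB in total

start = time.perf_counter()
for _ in range(ROUNDS):
    os.urandom(CHUNK)     # generate and discard: isolates the RNG from any disk I/O
elapsed = time.perf_counter() - start

mb_per_s = ROUNDS * CHUNK / elapsed / 1e6
print(f"{mb_per_s:.0f} MB/s of random data")
```

Comparing this figure against the drive's sequential write speed shows whether single-threaded generation can bottleneck the test.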
    Last edited by alfaunits; 05-26-2011 at 10:40 PM.

  6. #231
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I don't think you've been reading this thread.

    The random data in the test I did on the Areca wasn't generated in real time, I never claimed it was; it's pregenerated.
    The test on the Areca has nothing to do with this test, it was done in a completely different manner, but still using the same pregeneration scheme.

    If you are a programmer, write yourself a simple function that looks like this.

    declare a buffer of 4MB and fill the buffer using

    for I = low to high do
    buffer[I] = random(255)

    Write the buffer to a file, it will be incompressible.

    The data written to the Intels is just filled with "garbage" from the stack; they don't do compression, so why spend CPU power on them.
    The write speed so far is at a staggering 30-50MB/s.
    This is so simple that I can't believe you are questioning the test; it could have been done with a simple batch file just copying files, with the same results.

    Well, it's all there, it can be monitored using any tool.
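(Anvil's pseudo-code translates almost line for line; a minimal Python version, with the file name and a zlib round-trip added as assumptions to check that the output really is incompressible.)

```python
import random
import zlib

buf = bytearray(4 * 1024 * 1024)       # declare a buffer of 4MB
rnd = random.Random()                  # any byte-wise random source works here
for i in range(len(buf)):              # the "for I = low to high" loop from the post
    buf[i] = rnd.randrange(256)        # "buffer[I] = random(255)", i.e. a random byte

# Write the buffer to a file; the contents will be incompressible.
with open("incompressible.bin", "wb") as f:
    f.write(buf)

# Sanity check: deflate cannot shrink random data, so the ratio stays ~1.0.
ratio = len(zlib.compress(bytes(buf), 9)) / len(buf)
print(f"compressed/original = {ratio:.3f}")
```

A compressing controller like the SandForce would see the same thing: random bytes leave it nothing to deduplicate or compress.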

  7. #232
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    29.2TB Host writes

    Media wear out 83

    29_20_tb_host_writes.PNG

  8. #233
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 10
    Approximate SSD life Remaining: 93
    Number of bytes written to SSD: 24,704 GB

    Later I will remove the static data, let the app run for a bit and then I will put the static data back on. That should ensure that blocks are rotated.

  9. #234
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    The mother of all TRIMs: 20 seconds. That is with no static data, running Anvil's app with only 1GB free... so it's a delete across the full span of the drive.

    Now I will put back the static data and let it run normally. Hopefully this will help slow down the growth of the delta between most-worn and least-worn flash blocks.
    Attached Images

  10. #235
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,796
    Sorry then. I was under the impression all of them used random data, which is why I disliked the outlook. I did notice 0Fill for your Intel test, just forgot.

    Quote Originally Posted by Anvil View Post
    I don't think you've been reading this thread.

    The random data in the test I did on the Areca wasn't generated in real time, I never claimed it was; it's pregenerated.
    The test on the Areca has nothing to do with this test, it was done in a completely different manner, but still using the same pregeneration scheme.

    If you are a programmer, write yourself a simple function that looks like this.

    declare a buffer of 4MB and fill the buffer using

    for I = low to high do
    buffer[I] = random(255)

    Write the buffer to a file, it will be incompressible.

    The data written to the Intels is just filled with "garbage" from the stack; they don't do compression, so why spend CPU power on them.
    The write speed so far is at a staggering 30-50MB/s.
    This is so simple that I can't believe you are questioning the test; it could have been done with a simple batch file just copying files, with the same results.

    Well, it's all there, it can be monitored using any tool.

  11. #236
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    I had to guess a few of the inputs (especially for One_Hertz), but even if you take out the highly compressible data the SF drive worked with for the first ~5TB, it is still doing really well so far, especially considering the data is now incompressible, with no let-up for static data rotation.

    MWI = media wearout indicator
    R/Sect = reallocated sectors
    Attached Images

  12. #237
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Here are a couple of charts that show the impact of throttling based on file size and level of compressibility of data.

    Obviously this is the worst-case throttled state, but both sequential read and write speeds are hit quite hard with incompressible data. Highly compressible data, on the other hand, is unaffected.
    Attached Images

  13. #238
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Those are very nice charts Ao1

    The SF compression chart is just like I figured it would be, quite interesting as the drive is being throttled as well.

  14. #239
    SLC One_Hertz's Avatar
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,953
    80%, 2 reallocated sectors, 37.5TB. Switching to the new software.

  15. #240
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Intel have posted a link to a very interesting white paper:

    http://www.usenix.org/event/hotstora...pers/Mohan.pdf

    It talks about the impact of the frequency of writes.

    Longer recovery periods between writes can significantly boost endurance, potentially allowing blocks to undergo several million P/E cycles before reaching the endurance limit.

    (It's not going to happen in our test)
    Last edited by Ao1; 05-27-2011 at 06:41 AM.

  16. #241
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by One_Hertz View Post
    80%, 2 reallocated sectors, 37.5TB. Switching to the new software.
    We didn't agree on the random part, it's set to 1000ms per loop by default.

    1000ms = 20-30MB? on the Intels; is that something we can agree on, or do we want more?

    --

    I've finally reached 30TB +

    30.18TB Host writes
    Media Wear out 82
    Re-Allocations still at 4

    @Ao1
    Interesting link, will do some reading.
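(The figures in this post allow a back-of-the-envelope endurance projection; a sketch that assumes the media wearout indicator falls linearly with host writes, which is only approximately true in practice.)

```python
def projected_total_writes(host_writes_tb: float, mwi_now: int, mwi_start: int = 100) -> float:
    """Linear extrapolation: host writes (TB) expected by the time MWI reaches 0."""
    used = mwi_start - mwi_now
    if used <= 0:
        raise ValueError("MWI has not moved yet; nothing to extrapolate from")
    return host_writes_tb * mwi_start / used

# Anvil's X25-M above: 30.18TB of host writes with MWI down to 82
print(round(projected_total_writes(30.18, 82), 1))  # about 167.7 TB projected
```

The same two SMART readings from any point in the test give a running estimate of where the drive will land.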

  17. #242
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Here is some more info

    P/E specs are based on the minimum.

    http://www.jedec.org/sites/default/f...JESD47H-01.pdf

  18. #243
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 11
    Approximate SSD life Remaining: 92%
    Number of bytes written to SSD: 26,432 GB

  19. #244
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Really nice to see the Intel G3 with 25nm NAND beat the G2 with 34nm NAND!

  20. #245
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    Delta between most-worn and least-worn Flash blocks: 12
    Approximate SSD life Remaining: 91%
    Number of bytes written to SSD: 28,352 GB

    Edit:

    Another interesting snippet from the Intel forum:

    "Read disturb refers to the property of Nand that the more a block of Nand is read, the more errors are introduced. A brief note about Read Disturb (and other various Nand properties) are discussed in this technical note from Micron:

    http://download.micron.com/pdf/techn...and/tn2917.pdf


    Static data that is read frequently will eventually need to be refreshed or relocated before it reaches the Read Disturb limit because of this Read Disturb property."
    Last edited by Ao1; 05-28-2011 at 12:36 AM.

  21. #246
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    A lot of good reading in all those links!

    32.02TB host writes (just seconds ago)
    MWI 81

    nothing else has changed.

    32_02_tb_hostwrites.PNG

  22. #247
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    I had to reboot to apply Windows updates, and now it seems that DuraClass has really kicked in on incompressible data.

    I tried a secure erase from the OCZ toolbox, but it has not helped. When I tried to copy the static data back (mp3's) I could only get around 10MB/s. (Ended up at 8.55MB/s)

    The endurance app is currently running at 4.13MB/s

    Dang, looks like it's game over for me unless I can get the MB/s back up.
    Attached Images

  23. #248
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'd let it idle a while (a few hours).

    Pretty strange that it didn't slow down until the reboot?

    I also had a reboot today (it had been running for 12+ days without rebooting) and the speed picked up for the first few loops; not much, will check a bit later.

  24. #249
    Xtreme Mentor Ao1's Avatar
    Join Date
    Feb 2009
    Posts
    2,597
    It might have occurred when the app stopped running, or it might have been the reboot. Currently the app is running at 6.52MB/s.

    I'm going to do another secure erase, but then I will only run the app with no static data.

    If that does not work I will leave it on idle.

  25. #250
    Moderator Anvil's Avatar
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Sounds like an idea.

    If it needs idling it could take days; an interesting turn anyway. If full throttling means ~10MB/s, it's pretty much "disabled".

