
Thread: How big is Host Writes on your SSD?

  1. #26
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Anvil View Post
    So far that X25-V has written close to 1PB, I'll post a link later today.
    How is that possible? With flawless wear leveling that would imply that all the cells were rewritten 25,000 times, which is well above spec for cheap MLC flash.

I've always wanted to do a test on an X25-M: throw it on Iometer, give it a mixed sequential/random write test with different block sizes, and record when it fails.

  2. #27
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
I don't know.

    Here's one of the links; there is another page that's more recent, but the link is on my laptop.
    (on my iPad now)

  3. #28
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
Here is the last link (you need to translate it somehow).

    Each cell has been written 24,641 times according to the text on that page (I haven't checked, though).
    904,600.41GB in 7,224 hours would be ~125GB/h or ~35MB/s (which sounds high as an average).

    20110228-02.jpg
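    A quick check of that arithmetic, as a minimal sketch (the 904,600.41 GB and 7,224 hour figures are taken from the quoted page; the rest is plain unit conversion):

```python
# Sanity-check the average write rate implied by the quoted figures.
host_writes_gb = 904_600.41   # total host writes reported on the page, in GB
elapsed_hours = 7_224         # reported test duration

gb_per_hour = host_writes_gb / elapsed_hours        # ~125.2 GB/h
mb_per_second = gb_per_hour * 1024 / 3600           # ~35.6 MB/s (assuming 1 GB = 1024 MB)

print(f"{gb_per_hour:.1f} GB/h is roughly {mb_per_second:.1f} MB/s sustained")
```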

  4. #29
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    That is just ridiculous. How in the eff is that SSD still going?

  5. #30
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    There is no way that is legit. If it was, Intel would be singing praises about the longevity of their SSDs instead of claiming they can withstand just 15TB of random writes.

  6. #31
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
10–100 KB: 80% (system thumbnails and the like)
    100–500 KB: 10% (JPEG images and the like)
    1–5 MB: 5% (large pictures, MP3s and the like)
    5–45 MB: 5% (video clips and the like)
    Perhaps this is why. I'm not sure whether the percentages are based on the number of operations or on the volume of data, though (see the sketch below for why that matters).
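    To illustrate why that distinction matters, here is a minimal sketch comparing the average write size under the two readings; the bucket midpoints are rough assumptions of mine, not figures from the quoted source:

```python
# Illustrative only: assumed midpoint sizes for each bucket, in KB.
buckets_kb = [55, 300, 3_000, 25_000]   # 10-100KB, 100-500KB, 1-5MB, 5-45MB
shares = [0.80, 0.10, 0.05, 0.05]

# Reading 1: the percentages are shares of the number of operations.
avg_by_count = sum(s * b for s, b in zip(shares, buckets_kb))

# Reading 2: the percentages are shares of the data volume; derive the
# operation mix by dividing each volume share by its bucket size.
ops_mix = [s / b for s, b in zip(shares, buckets_kb)]
avg_by_volume = sum(shares) / sum(ops_mix)   # total volume / total operations

print(f"average write size if % = operations: {avg_by_count:.0f} KB")
print(f"average write size if % = volume:     {avg_by_volume:.1f} KB")
```

    Under these assumptions the two readings differ by more than 20x, which changes how write-intensive the workload looks.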

  7. #32
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    15TB of random writes.
Those are full-span writes, which run at much slower speeds, and since there is no free space on the drive there is not as much headroom for wear leveling, write combining, etc. It is a WORST-case scenario: a random write on a 100 percent full drive cannot function 'normally'.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  8. #33
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
(7,500,000,000,000 / (300 * 4096)) / 60 / 60
    = (7,500,000,000,000 / 1,228,800) / 3600
    = 6,103,515.625 / 3600
    ≈ 1,695.42 hours

    OR 70.64 days of completely random I/Os (full-span writes only).

I was off by a little before because I used 4K (4,000) instead of 4,096. The specification puts random write I/Os at 300/s using a full span.
    The Intel addendum lists their measured IOPS at 300 for 4KB, full-span, QD=32 random writes.
    When you do limited-span I/O testing (say an 8GB test file) you get increased efficiency from the coalescing and caching mechanisms, resulting in higher performance. This is why the IOPS for smaller data sets is much higher, to the tune of 30 MB/s if I recall correctly.
    If you are getting >10x the write speed then you are also getting at least 10x the coalescing. If they are getting 10x the coalescing then they are getting 10x the endurance.
    An increase in spare area from 7% to 17% provides over a 2x increase in write endurance. This is directly from Intel. 10% more spare area = 280% improvement in write endurance under most circumstances.
    Think about what happens if he is using the drive basically empty.

    EDIT: You must also realize these are host writes. After write combining and the effects of NCQ from the OS there can be a tremendous difference between host writes and OS writes.
    Last edited by Computurd; 05-10-2011 at 06:29 PM.
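    The same worst-case estimate as a minimal sketch; the 7.5 TB endurance figure and the 300 IOPS full-span spec are the numbers quoted above, not mine:

```python
# Worst-case time to burn through the quoted endurance figure with
# continuous full-span 4KB random writes at the specified sustained IOPS.
endurance_bytes = 7_500_000_000_000   # quoted endurance figure, in bytes
iops = 300                            # full-span 4KB random write IOPS (QD=32)
io_size = 4096                        # bytes per random write

seconds = endurance_bytes / (iops * io_size)
hours = seconds / 3600
days = hours / 24

print(f"{hours:.2f} hours, about {days:.2f} days, of continuous full-span random writes")
```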
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  9. #34
    Xtreme Monster
    Join Date
    May 2006
    Location
    United Kingdom
    Posts
    2,182
    Quote Originally Posted by Computurd View Post
Think about what happens if he is using the drive basically empty.
    Most likely. One of my projects will be done in a month or so, then I will switch to CEP2. I really want to see how much this SSD can withstand.

  10. #35
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
File size and sequential vs. random access make a huge difference to the burn-out rate. If you want to characterize a particular load and project how long the SSD will last under it, Intel explains how to do it. See post 7, which is explained a bit more in one of the IDF 2011 papers: Solid-State Drives (SSD) in the Enterprise: Myths and Realities - Session SSDS003
    https://intel.wingateweb.com/bj11/sc...talog.jsp?sy=0
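    For a rough idea of what such a projection looks like, here is a minimal sketch of the usual back-of-the-envelope formula; the drive parameters and workload numbers are assumptions for illustration, not Intel's published figures:

```python
# Rough SSD lifetime projection:
#   total host writes = NAND capacity * rated P/E cycles / write amplification
#   lifetime          = total host writes / host writes per day
nand_capacity_gb = 40          # e.g. an X25-V class drive (assumed)
rated_pe_cycles = 5_000        # commonly quoted rating for 34nm MLC
write_amplification = 1.5      # workload dependent; assumed for illustration
host_writes_gb_per_day = 20    # assumed desktop-style workload

total_host_writes_gb = nand_capacity_gb * rated_pe_cycles / write_amplification
lifetime_days = total_host_writes_gb / host_writes_gb_per_day

print(f"projected endurance: {total_host_writes_gb / 1024:.0f} TB of host writes")
print(f"projected lifetime:  {lifetime_days / 365:.1f} years at this workload")
```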

  11. #36
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post
(7,500,000,000,000 / (300 * 4096)) / 60 / 60
    = (7,500,000,000,000 / 1,228,800) / 3600
    = 6,103,515.625 / 3600
    ≈ 1,695.42 hours

    OR 70.64 days of completely random I/Os (full-span writes only).

    I was off by a little before because I used 4K (4,000) instead of 4,096. The specification puts random write I/Os at 300/s using a full span.
    The Intel addendum lists their measured IOPS at 300 for 4KB, full-span, QD=32 random writes.
    When you do limited-span I/O testing (say an 8GB test file) you get increased efficiency from the coalescing and caching mechanisms, resulting in higher performance. This is why the IOPS for smaller data sets is much higher, to the tune of 30 MB/s if I recall correctly.
    If you are getting >10x the write speed then you are also getting at least 10x the coalescing. If they are getting 10x the coalescing then they are getting 10x the endurance.
    An increase in spare area from 7% to 17% provides over a 2x increase in write endurance. This is directly from Intel. 10% more spare area = 280% improvement in write endurance under most circumstances.
    Think about what happens if he is using the drive basically empty.

    EDIT: You must also realize these are host writes. After write combining and the effects of NCQ from the OS there can be a tremendous difference between host writes and OS writes.
Yes, that explains part of it, but not to the extent of writing 1PB to an X25-V. Here Intel claims the 160GB X25-M can withstand 150TB if only 96GB is used and the rest is left for overprovisioning (8.3K 8KB random write IOPS in this configuration, or 66MB/s):
    http://cache-www.intel.com/cd/00/00/...555_459555.pdf

    The X25-V has 4x less NAND, so we are looking at 37.5TB. That Asian site is claiming to have written 25x that amount already and that the SSD is still 97% healthy. There is no way. I am nearly willing to buy and kill an X25-V to see for myself.
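    The scaling argument in numbers, as a minimal sketch; the 150 TB / 160 GB figure is from the linked Intel document as quoted above, and the ~905 TB of host writes comes from the translated page earlier in the thread (binary GB-to-TB conversion assumed):

```python
# Compare the claimed host writes against a capacity-scaled Intel endurance figure.
x25m_endurance_tb = 150      # Intel figure for the 160GB X25-M (96GB span, rest as spare)
x25m_nand_gb = 160
x25v_nand_gb = 40            # the X25-V has one quarter of the NAND

x25v_endurance_tb = x25m_endurance_tb * x25v_nand_gb / x25m_nand_gb   # 37.5 TB
claimed_host_writes_tb = 904_600.41 / 1024                            # ~883 TB from the page

print(f"scaled X25-V endurance figure: {x25v_endurance_tb:.1f} TB")
print(f"claimed host writes:           {claimed_host_writes_tb:.0f} TB, "
      f"about {claimed_host_writes_tb / x25v_endurance_tb:.0f}x the scaled figure")
```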

  12. #37
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    What if the SMART data is wrong about the life?

  13. #38
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by One_Hertz View Post
Yes, that explains part of it, but not to the extent of writing 1PB to an X25-V. Here Intel claims the 160GB X25-M can withstand 150TB if only 96GB is used and the rest is left for overprovisioning (8.3K 8KB random write IOPS in this configuration, or 66MB/s):
    http://cache-www.intel.com/cd/00/00/...555_459555.pdf

    The X25-V has 4x less NAND, so we are looking at 37.5TB. That Asian site is claiming to have written 25x that amount already and that the SSD is still 97% healthy. There is no way. I am nearly willing to buy and kill an X25-V to see for myself.
The drive might be healthy, but the wear-out indicator is at 1. No relocated sectors though, which seems a bit strange. I read somewhere from Intel that even when the indicator drops to 1 you will still be able to carry on writing for some time.

    The test methodology is hard to follow when translated. It seems that most writes were sequential and consisted of relatively large transfer sizes. It also seems they stopped the test every 12 hours to run the Toolbox.

    One of the other links Anvil provided seems to look at the impact on endurance without TRIM. It's too hard for me to follow what they are saying, but it would be interesting to know.

  14. #39
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by RyderOCZ View Post
    What if the SMART data is wrong about the life?
I guess it can only provide an indication, but I would imagine Intel did a lot of research, both on understanding typical workloads and on testing the NAND until it burnt out under those loads, before they put the product out to market.

    For sure Intel are very robust in their endurance claims.

  15. #40
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
There are 0xAA (170) retired blocks on that thing, and the count has been increasing a lot lately.

    A pity it would take a year to reproduce, unless it fails within a few months.

    I don't think there's anything wrong with this test, it's just not clear what's been done; the only thing we "know" is that the host writes are 2-5 times higher than one can expect from 34nm NAND.

    Writing 36.5 "TB" (20GB x 365 days x 5 years) should be doable in a couple of weeks.
    Last edited by Anvil; 05-11-2011 at 07:06 AM.
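    A quick feasibility check of that estimate, as a minimal sketch; the sustained write speed is an assumption (roughly what a 40GB X25-V manages sequentially), not a measured figure:

```python
# How long would it take to write 36.5 TB at a sustained rate?
target_tb = 36.5              # 20GB/day * 365 days * 5 years
sustained_mb_per_s = 35       # assumed sustained sequential write speed

seconds = target_tb * 1024 * 1024 / sustained_mb_per_s
days = seconds / 86_400

print(f"roughly {days:.1f} days at {sustained_mb_per_s} MB/s")
```

    At that rate the 36.5 TB target lands at just under two weeks, which matches the "couple of weeks" estimate.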

  16. #41
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    http://www.pceva.com.cn/html/2011/st...319/229_2.html
Here is a bit of info on SandForce performance throttling to curb P/E cycles. In option E you can factory-set the throttling. I wonder which option reviewers get.
    (Attached thumbnails: 2.png, 1.jpg)

  17. #42
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Ao1 View Post
    For sure Intel are very robust in their endurance claims.
    Hope so!
    But after their P67-B2 fiasco I wouldn't blindly trust them.

  18. #43
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Here Intel claims the 160GB X25-M can withstand 150TB if only 96GB is used and the rest is left for overprovisioning
    Again, all of Intel's endurance testing specs are with full-span writes: no write combining, wear leveling or GC. Absolute worst-case scenario.

    That Asian site is claiming to have written 25x that amount already and that the SSD is still 97% healthy
    Yup, 25x the amount of a full-span test. That should actually be easy if you are benefiting from the write combining, the wear leveling and the GC, and if they're running the Toolbox every 12 hours it's a walk in the park.

    See, let's think here. What would write amplification be without ANY write combining? Do you think it would be 10 times worse? Or 100 times worse? What are your thoughts on that?
    QOTD: What would write amplification be if there were no benefit from write combining? (A rough worst-case sketch follows below this post.)

    You can get a 280 percent endurance increase just from OP. Even more is possible. And that is measured, again, with full-span writes.

    Don't trust the SMART data. I'm sure the wear is much worse than that.


    EDIT: If the Intels only took that much writing, that quickly, they would be dying in droves by now. I am on tons of storage forums, and nowhere have I seen anyone pop up and say "I've written my drive to death" for any type of SSD, let alone the Intels.
    Last edited by Computurd; 05-11-2011 at 03:28 PM.
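    As a rough answer to that QOTD, here is a minimal sketch of the usual worst-case bound: with no write combining, every small random write can force a whole erase block to be rewritten, so write amplification approaches block size divided by write size. The block size and the "often quoted" real-world figure below are typical values I am assuming, not numbers from this thread:

```python
# Worst-case write amplification with no write combining or coalescing:
# each 4KB host write dirties one page but costs a whole erase-block rewrite.
block_size_kb = 512     # typical 34nm MLC erase block size (assumed)
host_write_kb = 4       # typical random-write size

wa_worst = block_size_kb / host_write_kb    # 128x
wa_quoted = 1.1                             # real-world figure often quoted for the X25-M (approximate)

print(f"worst case WA without combining: {wa_worst:.0f}x")
print(f"often-quoted real-world WA:      ~{wa_quoted}x")
```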
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  19. #44
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
@Ao1
    Lots of info on that site.

    @CT
    If you look at the CDI info: E9 = Media Wearout Indicator and it's at 1. They start off at 100, so I believe Ryder (OCZ) could be right; the 97% health status can't be trusted in this case.
    (SMART isn't quite there yet.)

  20. #45
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Yeah, SMART is pretty lame, to be honest. I have yet to see one predict correctly when an SSD will be written to death, which would be hard to do anyway; I haven't seen any *yet*.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  21. #46
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post
Again, all of Intel's endurance testing specs are with full-span writes: no write combining, wear leveling or GC. Absolute worst-case scenario.
Did you not read the link? They use 96GB out of 160GB; 64GB is available to do all the write combining the SSD needs to do. That test is not full span. Full span implies the full 160GB is used.

You don't seem to understand why what they are claiming is impossible. Even if what they are doing is absolutely the easiest workload imaginable for the Intel SSD (it isn't) and the write amplification is 1 (it can't get below 1 unless compression is used, which Intel does not do), then they claim to have overwritten each cell 25,000 times. That is 5 times the spec for 34nm MLC flash. No matter how magical you believe the controller to be, it cannot do anything to increase the raw life of the NAND being used; the absolute best the controller can do is not waste any write cycles on unnecessary operations. (The lower bound is sketched after this post.)

    If the flash cells could withstand 25,000 writes and still operate, the manufacturers sure as hell would not be marketing it at 5k.
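    That lower-bound argument as a minimal sketch; the 1 PB of host writes and the 40 GB of NAND are the figures discussed in this thread, and the 5K rating is the commonly quoted 34nm MLC spec:

```python
# With write amplification >= 1, host writes are a lower bound on NAND writes,
# so the average P/E cycle count is at least host_writes / NAND capacity.
host_writes_gb = 1_000_000    # ~1 PB claimed in the thread
nand_capacity_gb = 40         # X25-V user capacity (raw NAND is slightly higher)
rated_pe_cycles = 5_000       # commonly quoted rating for 34nm MLC

min_avg_pe = host_writes_gb / nand_capacity_gb    # 25,000 even at WA = 1
print(f"implied average P/E cycles: at least {min_avg_pe:,.0f} "
      f"({min_avg_pe / rated_pe_cycles:.0f}x the rated {rated_pe_cycles:,} cycles)")
```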

  22. #47
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
They use 96GB out of 160GB; 64GB is available to do all the write combining the SSD needs to do. That test is not full span. Full span implies the full 160GB is used.
    Well yes, when they are testing the benefits of OP. Maybe I did word that incorrectly, but regardless, pointing out a minor flaw in a large post that lists many facts is not going to dissuade me from my view that your reasoning is flawed.


    Where are all the mass die-offs?
    How many guys have absolutely hammered Vertex gen 1s with thousands of pointless benchmarks, not to mention extreme daily usage, and their devices are fine, A-OK? I know I have.
    Where are these people with all of these dead drives?

    Have you considered the possibility that manufacturers are intentionally setting the bar very low, simply because they don't know for sure?

    At the end of the day, endurance testing is for one thing: showing the absolute worst-case scenario. We all know that drives at 100 percent capacity perform like crap. The reason they do is that they are essentially unable to perform their 'housekeeping' correctly. It is the worst-case scenario for any device, but in practical application there just isn't any real-world data pointing to a mostly empty disk degrading to the point of uselessness as quickly as a disk that is absolutely 100 percent full.

    There aren't any mass die-offs or mass complaints to support the idea that these drives can only handle that much random writing in normal usage scenarios. We hang out on forums, man; on most forums that is where people come to complain and whine. I swear some forums out there are just a bunch of people complaining about products incessantly, every little thing.

    Believe me, we would be the first to know if these issues were out there.
    Last edited by Computurd; 05-11-2011 at 05:28 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  23. #48
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
Who said there is an issue? Most people write under 10TB to their SSDs... there is no problem if the limit is 30-40TB (and much higher for larger SSDs). 30-40TB is A LOT of benchmarking, more than anyone does. I was merely saying that an X25-V cannot withstand 1PB of writes. It is impossible. Nothing more, nothing less.

  24. #49
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by zalbard View Post
    Hope so!
    But after their P67-B2 fiasco I wouldn't blindly trust them.
Actually, Intel's handling of their mistake makes me trust Intel more. That sort of problem probably would not have been discovered by consumers or reviewers for a long time; it probably would have been months before the SATA ports started failing. But Intel discovered their own mistake and voluntarily recalled all the bad parts. Obviously it would have been better if the mistake had not been made in the first place, but given that it was, I cannot see how Intel could have handled it any better.

  25. #50
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by One_Hertz View Post
    If the flash cells could withstand 25,000 writes and still operate, the manufacturers sure as hell would not be marketing it at 5k.
I've never seen the definition of the 5K specification. Does anyone know the details?

    For example, if the lifetime is 5K P/E cycles, what does that mean statistically? Perhaps they test 1TB worth of cells, and after 5K cycles 99.99% of the cells are okay? Or maybe at double the cycles (10K), 90% are still going? There must be some statistical definition behind the 5K-cycle spec. I would assume the probability of failure after 5K cycles is very low, but if that is the case, it may be that 95% of the cells last much longer than 5K. If there are enough reserve cells in the SSD, then you could retire the worn 5% and keep going.

    So I am surprised if a 34nm flash SSD has written enough for 25K P/E cycles, but I would not dismiss it as impossible. It really depends on the details of that 5K specification.
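    To make that statistical point concrete, here is a purely illustrative sketch; the lognormal endurance distribution and its parameters are invented for the example and do not come from any flash datasheet:

```python
import math

# Purely illustrative: model cell endurance as lognormal so that only a tiny
# fraction of cells has worn out by the 5K-cycle rating point.
median_cycles = 30_000    # assumed median cell endurance (made up)
sigma = 0.5               # assumed spread of log-endurance (made up)

def fraction_failed(cycles: float) -> float:
    """Fraction of cells worn out by the given P/E cycle count (lognormal CDF)."""
    z = (math.log(cycles) - math.log(median_cycles)) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for c in (5_000, 10_000, 25_000):
    print(f"after {c:>6,} cycles: {fraction_failed(c) * 100:6.3f}% of cells worn out")
```

    Depending on how tight or loose that distribution really is, a 5K rating can either leave enormous headroom or very little, which is exactly why the definition behind the spec matters.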

