Results 351 to 375 of 598

Thread: Sandforce Life Time Throttling

  1. #351
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    Bummer that E6 disappeared, was really hoping to see that tick back upward with idle time working.
    Same here.

    I just started up Anvil's app to check MB/s. Screenshot below. My drive has a 1-year LTT AFAIK.

    To elaborate further on what happened after the f/w update: the power-on count was reset to 1. At first the E6 life curve appeared to be 100. Cruel blow, as minutes later it changed back to 90. The drive is still throttled.

    It doesn't look like idle time releases the throttling in the same way as the V2. Will know better tomorrow.


    [Attachment: AA.png]

    [Attachment: Untitled.png]

  2. #352
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Did x09's raw values always have non-zero data a few bytes up from the 195? Is it possible E6 has just been completely depleted?

  3. #353
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    Did x09's raw values always have non-zero data a few bytes up from the 195?
    Here are a couple of typical entries from post #327:
    52 203 166 0 0 0 166
    47 116 166 0 0 0 166

    Quote Originally Posted by Vapor View Post
    Is it possible E6 has just been completely depleted?
    Could be. After 5 hours of idle the drive is still 100% throttled. That would not have been the case for the V2.
    E6 had changed to 100, but as soon as I started running AS SSD it reverted back to 90.
    I'll leave it on idle overnight. If it's still throttled in the morning something is definitely not right.

  4. #354
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    It could also be possible you're way under the lifetime slope, in a bad way. 16-17MiB/sec is way too fast for 64GB, 3000-5000 P/E cycles, and integer years of lifetime. Lifetime setting could be very short though...hmmm.

    I guess time will tell

  5. #355
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    FW 2.11 looks to be the way to go, but not if it makes the drive unusable. Let's hope it gets sorted out.

  6. #356
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    I just looked at all the compression data and it's a little fuzzy to me. To provide some redundancy, the controller writes additional data for each group of programmed pages, and I always assumed that this kind of activity is also registered as flash writes. But from the test with incompressible data (only 2% more when writing movies) it seems to me it is not registering everything. Also, the zero-fill and 8% compression results are quite interesting. From the difference between 14.6% and 17%, I would say that any file with a compression ratio below 7-8% will probably compress to the same 14.6% (the compression range for real-life databases).

    For me it is clear that the controller somehow needs to cluster data together to be able to compress it, but how it does that is quite strange. A zero-fill test should compress multiple pages into one as a best-case scenario, so I would expect the host-writes delta to be a multiple of the flash-writes delta, and if one is divisible by 8 (the page size), so should the other. In the earlier test there were 89 host-write increments for 13 flash-write increments, and those values do not match anything I would expect.
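
    For what it's worth, here is a minimal sketch of the ratio being discussed, computed from host-writes and flash-writes deltas. The attribute numbers in the comment (0xF1 for host writes, 0xE9 for NAND writes) are my assumption about how SandForce drives expose these counters; check your SMART tool's output.

Code:
# Derive the apparent compression / write-amplification factor from two
# snapshots of the SMART write counters (assumed: 0xF1 = lifetime host
# writes in GiB, 0xE9 = lifetime NAND writes in GiB).
def apparent_wa(host_before, host_after, nand_before, nand_after):
    """Return NAND writes per host write over the interval."""
    host_delta = host_after - host_before
    nand_delta = nand_after - nand_before
    if host_delta <= 0:
        raise ValueError("no host writes in the interval")
    return nand_delta / host_delta

# Using the zero-fill deltas quoted above: 13 GiB of flash writes for
# 89 GiB of host writes.
print(f"zero-fill: {apparent_wa(0, 89, 0, 13):.3f}x")   # ~0.146 -> 14.6%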

  7. #357
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    I'm not sure parity writes are included in the NAND writes statistics, tbh.

    And to figure it out, I think we need a longer/better test of incompressible data before citing 'real WA' as 1.02x. The SF-2200 would have to be very good to compress movies at all, so the jury is still out on 'real WA', partly due to the small sample size of just 54.5GB, IMO (not sure of the best way to test this... probably 100s of GB of 101% from Anvil's App, like the other compression settings).

    As for how the SF-2200 handles compression: whatever the 'real WA' of the SF-2200 actually is would presumably need to be factored out of the compression figures from Ao1's testing if you wanted to isolate the compression ability of the SF-2200. And regarding the 0-fill being 14.6% and 8% being 17%, there is probably some clustering causing inefficiency with 0-fill (with limited deduplication on top of that, I suspect), but I'm not sure how much of a concern that should be. Both usage scenarios are pretty/very rare (especially 0-fill), and the compression is clearly effective on less compressible data sets.

  8. #358
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    My takeaway from the data is that about the only "real" data that Sandforce can compress significantly is Windows installations and application programs. It did little or nothing with documents, videos, pictures, or audio files. And even on the Windows and applications, the compression ratio was only 75% when measured by copying the folders over. The installation measurement found 50%, but I wonder if that is an accurate measurement (due to the small size of data written), or maybe the installation included writes of empty page files or empty hibernation files that were not included in the "Copy Windows and Program Folder".

  9. #359
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    The V3 has had at least 9 hours of idle time but it is still throttled. The main differences in throttling between the V2 & V3 are:

    • The write speeds with the V2 were throttled to the same degree regardless of xfer size. With the V3 throttling varies depending on xfer size and is less severe for sequential xfers.
    • With the V2 even a small amount of idle time was enough for throttling to be released until excessive writes reoccurred. With the V3 the drive has (so far) remained throttled.

    With regards to compression testing, the problem with testing more data is that once all the NAND has been written to, WA kicks in and distorts the figure. That is why I secure erased before each run and tried to keep data transfers within the capacity of the drive. To test a wider sample I would need a much larger drive. That said, I doubt the results would change.

    The take away from the testing undertaken by Jeffrey Layton in post #280 was "remember that you're not studying the compressibility of the data file as a whole but rather the chunks of data that a SandForce controller SSD would encounter. So think small chunks of data."

    For typical client applications it's hard to see how compression works to achieve anything close to the "up to" specified speeds. The only way of telling is to monitor performance of real applications. I might just get another V3 to do this.

    With regards to the power-on count, the hours now seem to be too high. Currently it is reporting 217 hours. It is 11 hours after post #351 and power-on hours have gone from 195 to 217 = 22 hours. Exactly double.

    [Attachment: as-ssd-bench OCZ-VERTEX3 29.07.2011 06-01-47.png]

    [Attachment: Untitled.png]
    Last edited by Ao1; 07-28-2011 at 09:25 PM.

  10. #360
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    With regards to compression testing, the problem with testing more data is that once all the NAND has been written to, WA kicks in and distorts the figure.
    Is that an assumption, or did you determine it through testing?

    I think that write amplification should have little effect when you are doing large sequential writes, followed by deleting the large files, TRIM, and then more large sequential writes. I'd be surprised if there was any write amplification at all in such a circumstance.

    So, all you need to do is create an archive (like RAR), with no compression, of all the files you want to test. Then you have one large file that can be used to test the compression via a large sequential write. Copy that file to the SSD (multiple times if it will fit), then delete the file and empty the trash (TRIM), then do it again. Repeat until you have enough data.
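
    Something like the following loop would do it (a rough sketch of the procedure described above, with hypothetical paths; reading the host/NAND write counters between passes is left as a manual step with your SMART tool of choice):

Code:
import os
import shutil
import time

ARCHIVE = r"D:\test\store_only.rar"   # hypothetical source archive (stored, no compression)
TARGET_DIR = r"E:\ltt_test"           # hypothetical folder on the SandForce drive
PASSES = 10

os.makedirs(TARGET_DIR, exist_ok=True)
size_gib = os.path.getsize(ARCHIVE) / 2**30

for i in range(PASSES):
    dest = os.path.join(TARGET_DIR, f"pass_{i}.rar")
    shutil.copyfile(ARCHIVE, dest)    # one large sequential write
    os.remove(dest)                   # delete so the OS can TRIM the freed LBAs
    time.sleep(60)                    # give the drive time to process the TRIM
    print(f"pass {i}: ~{size_gib * (i + 1):.1f} GiB written so far")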

  11. #361
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    WA would be minimal for sequential writes, but it can't be excluded/isolated from compression. Anvil's app generates small transfers that will incur an element of WA. My concern therefore was that more data would most likely be less accurate once WA became applicable.

    I have a few ideas I'm working on, which will involve real files at various sizes, which I will test incrementally.

  12. #362
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    WA would be minimal for sequential writes, but it can't be excluded/isolated from compression. Anvil's app generates small transfers that will incur an element of WA. My concern therefore was that more data would most likely be less accurate once WA became applicable.
    Huh? What does Anvil's app have to do with anything? Write amplification would be virtually non-existent and CAN be isolated from compression using the technique that I just outlined.

  13. #363
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ~6 hours later and the power-on count has gone from 217 to 230. The SF is working double time. The good news is the drive is no longer throttled. Now I will see how much I can write before it becomes throttled again.

    (John, I was referring to the testing I was doing with Anvil's app re: WA.)

    [Attachment: Untitled.png]

    [Attachment: as-ssd-bench OCZ-VERTEX3 29.07.2011 12-31-18.png]

  14. #364
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    It took less than 1.5 hours of running Anvil's app (259GB) to throttle again. I wasn't around for the last hour to see exactly when it happened.

    There is a problem now, however, due to the way power-on hours are being reported. Do you trust the SMART value for an LTT calculation, or actual hours? I don't think I can do much more with this particular drive with regards to LTT.

  15. #365
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Waiting 24hrs now that it hit the throttle state?

  16. #366
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Yes. I think the important question is how much data can be written to the drive per day before LTT kicks in. The only question I have is how to determine that accurately. It is made especially difficult if it takes longer to throttle with sequential transfers. That seems to necessitate at least two independent series of tests: one for random writes and another for sequential, and maybe yet another for a 50/50 mix. Obviously the easiest test is the one that has been discussed, based on the idea of a linear life curve that the firmware tries to stay below, so that the longer you wait the more you can write. But is this anything more than a (very plausible) assumption?

    It would be nice if we had a modified version of Anvil's app which monitors the E6 attribute and only writes when the drive is in an unthrottled state. Then you could simply monitor how much data can be transferred per day in unthrottled mode. You could even easily compare Sandforce drives from different manufacturers this way.

    Even without E6, maybe a program could be written that monitors its own transfer rate; when it drops below a certain level it stops writing and goes into a wait/test mode, periodically making small transfers to test whether LTT is still engaged. When LTT is finally lifted it logs the time and starts writing again until LTT engages again. Every 24 hours it could log the total amount written and the number of LTT engagements.
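
    For what it's worth, a rough sketch of such a program might look like the following (the thresholds, file path and block size are arbitrary placeholders, and this is not an existing tool):

Code:
import os
import time

TARGET = r"E:\ltt_probe.bin"            # hypothetical file on the SandForce drive
BLOCK = os.urandom(16 * 1024 * 1024)    # 16 MiB of incompressible data
THROTTLE_MBPS = 20.0                    # below this rate we assume LTT has engaged
PROBE_INTERVAL = 15 * 60                # while throttled, re-test every 15 minutes

def write_block(path, data):
    """Append one block, flushed to disk; return the achieved MiB/s."""
    start = time.time()
    with open(path, "ab") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    return (len(data) / 2**20) / (time.time() - start)

written_gib = 0.0
engagements = 0
while True:
    rate = write_block(TARGET, BLOCK)
    written_gib += len(BLOCK) / 2**30
    if os.path.getsize(TARGET) > 8 * 2**30:   # keep the probe file small
        os.remove(TARGET)                     # delete -> TRIM reclaims the space
    if rate < THROTTLE_MBPS:
        engagements += 1
        print(f"LTT engaged at {time.ctime()} after {written_gib:.1f} GiB "
              f"(engagement #{engagements})")
        while write_block(TARGET, BLOCK) < THROTTLE_MBPS:
            time.sleep(PROBE_INTERVAL)        # wait/test mode
        print(f"LTT released at {time.ctime()}")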

    The problem with doing it manually is the granularity issue. How often can you really check to see if the LTT has been lifted? Once an hour? If you assume consistent LTT=FALSE and LTT=TRUE time periods then you could start to narrow it down so that you know almost exactly when you would need to check for the LTT state change. Let's say the LTT=FALSE period is 90 minutes and the LTT=TRUE period is 9 hours, so that you could write for 90 minutes, wait 9 hours and then write for another 90 minutes. You would just have to check Anvil's app at t=90 min and then again at t=10.5 hours and so on, until you observed a consistent pattern over at least a few cycles. If the linear life curve model is correct then the data written per day should be the same as just waiting 24 hours between write sessions.

  17. #367
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Weird. E6 now seems to be reporting as it did pre-2.11. Power-on hours are currently 252. Still running at double time (2 power-on hours = 1 real hour).

    E6 100
    E6 raw values
    0 2 84 0 0 0 100 (1byte)

  18. #368
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    E6 100
    E6 raw values
    0 4 107 0 0 0 100 (1byte)
    Power on hours = 267

  19. #369
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    Waiting 24hrs now that it hit the throttle state?
    It's going to be tricky due to the ongoing discrepancy between the SF's current counting of power-on hours and normal time.

    I will test both. Normal time first and then 24 SF power on hours. I need to wait a bit before starting.
    Last edited by Ao1; 07-30-2011 at 04:52 AM. Reason: removed graph

  20. #370
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Maybe this "fast" time counting is there to compensate for the fact that average users keep their computers running for only 8-12 hours a day. It's very important for users to be able to write at least X GiB of data per "usable" day and, more importantly, the user should not be impacted by the needed idle time. The controller could even take a rough guess at the acceleration factor by looking at the real power-on hours count, the total number of power on/off cycles, and the total number of unexpected power failures.
    What SandForce-based SSDs desperately need is a way to count powered-down time between power-on cycles. That would probably be achievable to some extent with some capacitors and an efficient counter.

  21. #371
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    I think the fact that the warranty throttling is undocumented makes it easier for Sandforce and OCZ to sell SSDs with buggy firmware. After all, if no one knows what is supposed to trigger the throttling, then no one can claim it is not working properly. OCZ blames it on Sandforce being secretive, which is absurd since Sandforce has patented the technique (which isn't particularly innovative anyway) and so the fundamentals can be known to anyone simply by looking up the patent.

    All that is needed is for OCZ to document the specifics for each SSD model: what triggers the throttling, how slow is the throttled performance, and what needs to happen for throttling to be released. This seems like an obvious thing for the SSD datasheet, since it directly impacts the functioning of the SSD and does not require disclosing any of Sandforce's trade secrets. But OCZ's representative dubiously claims Sandforce is keeping it secret, and no doubt Sandforce would say that OCZ is keeping it secret. It is like Laurel and Hardy selling SSDs.

  22. #372
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I got the V3 on the 19th and it's been powered up ever since.

    In theory it should have around 264 hours. With f/w 2.08 it seemed to miss hours; with 2.11 it seems to have caught up, and it is now exceeding the theoretical maximum hours the drive could have been powered up for.

    Anyway, 24 hours of real time and the endurance app is up and running again. (46 hours in SF time)
    Last edited by Ao1; 07-30-2011 at 06:32 AM.

  23. #373
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    E6 is counting down as writes increase on a linear basis.

    0 5 204 0 0 0 100 37,484
    0 5 3 0 0 0 100 37,700

    0 4 253 0 0 0 100 37,717
    0 4 194 0 0 0 100 37,787

    It seems the leading value (05, 04, ...) counts down by one after the next value has run from 253 down to 1, and that next value counts down by one for approximately every 1 GiB of data written.

    So in theory about 5 × 253 GiB remain before it reaches 00 000 = ~1.2TiB / ~6 hours.
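
    Reading those two values as a single big-endian counter (my interpretation of the numbers above, not a documented SandForce format) gives:

Code:
def e6_remaining(raw):
    """raw = the seven raw values as displayed; combine the 2nd and 3rd as one number."""
    return raw[1] * 256 + raw[2]

samples = [
    ([0, 5, 204, 0, 0, 0, 100], 37484),   # (raw values, GiB written)
    ([0, 5,   3, 0, 0, 0, 100], 37700),
    ([0, 4, 253, 0, 0, 0, 100], 37717),
    ([0, 4, 194, 0, 0, 0, 100], 37787),
]
for raw, written in samples:
    print(f"{written} GiB written -> {e6_remaining(raw)} left")
# 37484 -> 1484, 37700 -> 1283, 37717 -> 1277, 37787 -> 1218:
# the counter drops by roughly 1 per GiB written.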

    [Attachment: Untitled.png]

  24. #374
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    In the absence of any better data I'm going to hang my hat on the figures below, based on an average over the LTT warranty period and 3k P/E cycles. On top of the initial credit it appears you also get credit days before throttling kicks in, i.e. you can get away with writing more than the max daily allowance as long as it then evens out over a period of days. I'd guess no more than 7 days.

    This is different to what SF state, as they say the PE count will never go below the life time curve, but it clearly does.

    I'm hanging my hat on those figures based on the V2 1 year LTT and the ~7MB/s throttle speed, which gives around 220TiB on a MWI calculation. All those figures tie in well.

    • 0.60TiB per day with 1 year throttle/ 25.6GB per power on hour/ 7.28MB/s
    • 0.2TiB per day with 3 year throttle/ 8.53GB per power on hour/ 2.42MB/s
    • 0.12TiB per day with 5 year throttle/ 5.12GB per power on hour/ 1.45MB/s
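
    For reference, the bullets above are just a fixed write budget spread evenly over the warranty period. A quick restatement of that arithmetic (the ~220TiB budget is the MWI estimate mentioned above, not an official SandForce figure):

Code:
# Spread a fixed lifetime write budget evenly over the LTT warranty period.
LIFETIME_BUDGET_TIB = 220.0   # assumed budget, from the MWI-based estimate above

for years in (1, 3, 5):
    per_day_tib = LIFETIME_BUDGET_TIB / (years * 365)
    per_hour_gib = per_day_tib * 1024 / 24
    mib_per_sec = per_hour_gib * 1024 / 3600
    print(f"{years}-year LTT: {per_day_tib:.2f} TiB/day, "
          f"{per_hour_gib:.1f} GiB/hour, {mib_per_sec:.2f} MiB/s")
# 1 year: ~0.60 TiB/day, ~25.7 GiB/h, ~7.3 MiB/s
# 3 year: ~0.20 TiB/day, ~8.6 GiB/h, ~2.4 MiB/s
# 5 year: ~0.12 TiB/day, ~5.1 GiB/h, ~1.5 MiB/s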

    Ironically it looks like larger capacity drives (>100GB) will throttle more easily and more severely, as they have a 3-year LTT vs a 1-year throttle for drives <100GB.

  25. #375
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    An allowance of 1.2TiB per 24hrs of idle time works out to 438TiB per year or 7000 P/E cycles.

    With a +/- 10%, that's 1.08TiB-1.32TiB per 24hrs.

    An allowance of 1.08TiB per 24hrs of idle time works out to 6300 P/E, which is getting close to 1/2yr lifetime setting on 3000 P/E cycle NAND.

    An allowance of 1.32TiB per 24hrs of idle time works out to 7700 P/E cycles, the exact same as 16MiB/sec throttle slope.
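
    As a quick sanity check of those conversions, assuming a 64GiB drive and a 1:1 host-to-NAND write ratio (both assumptions on my part):

Code:
CAPACITY_GIB = 64   # assumed drive capacity

def pe_per_year(tib_per_day):
    """Convert a daily write allowance into full-drive P/E cycles per year."""
    return tib_per_day * 1024 * 365 / CAPACITY_GIB

for allowance in (1.08, 1.20, 1.32):
    print(f"{allowance:.2f} TiB/day -> ~{pe_per_year(allowance):.0f} P/E cycles per year")
# 1.08 -> ~6300, 1.20 -> ~7000, 1.32 -> ~7700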

    Also of note, the first two bytes from E6 raw seem to be acting as if they were a 2-byte number, not two separate 1-byte numbers.

    Was 05 204 the starting value at the end of the 24hrs of idle? If so, then the 1.2TiB number isn't what is important, it's 05 204 = 05CCh or 1484d, or 1484GiB if it's 1:1.

    EDIT: that 7.3MiB/sec after WA looks appropriate for a 1yr, 3000 P/E cycle 40GB V2
    Last edited by Vapor; 07-30-2011 at 08:07 AM. Reason: edit
