It's running great. 64.23MB/s to be exact.
Anytime now E6 is going to drop to 90.......
E6 100
E7 99%
E9 11,078 GB
E6 raw values
0 81 100 0 0 0 100 (1byte)
The relationship between E9 and the raw value of E6 is linear. (The graph from #299 remains the same).
AVG 64.53MB/s
SPECULATION
E6, rather than E7, is based on PE cycles. When E6 drops below a threshold, write speeds will be throttled and E7 will eventually level out to E6.
Without throttling (and assuming E6 is PE) the theoretical PE cycle count would run out in 148 hours at its current rate.
So, if E6 is below E7, throttling is likely to kick in at some stage. If E6 is above E7 throttling is not going to be a problem.
Based on the ~35,000GB write capacity projected by E6 and a three year warranty (26,280 hours), you could write 1.33GB per power on hour without being throttled, which is well in excess of normal client use. On the other hand, heavy writing with low power on hours could be problematic. 1.33GB assumes the SSD is powered on for the entire 26,280 hours.
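As a quick sanity check, the arithmetic behind the 1.33GB figure can be sketched like this (a rough calculation using the figures above, not anything from an SF datasheet):

```python
# Rough sketch of the write-budget arithmetic above (figures from this post, not official specs).
projected_capacity_gb = 35_000      # write capacity projected by the E6 life curve
warranty_hours = 3 * 365 * 24       # three year warranty = 26,280 power on hours

gb_per_powered_hour = projected_capacity_gb / warranty_hours
print(f"{gb_per_powered_hour:.2f} GB per power on hour")   # ~1.33 GB/hour
```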
I'm quite sure they are operating on a different scale than the 24h clock, most likely 8h or less. Anyway, it's not likely to be bitten by the LTT issue.
Some of the advice given on forums for the SF1 series controller indicates otherwise (the "let it idle" advice is based on GC activity); let's hope that isn't needed on the SF2 series drives.
So, in a few days we should know if it works like the V2 :)
As the MWI has dropped to 99% it would be reasonable to assume that the credit period has expired if it was based on PE cycles. The endurance app is, however, still happily plodding along. I'm going to guess that it will do so for the same 7 day duration that occurred with the V2. That would confirm that the credit period is based on time, not PE cycles, BUT if it runs for 7 days (168 hours) at the current speed the theoretical PE cycle count will expire if E6 is correct. :shrug:
Time will tell :)
Will you check the compression ratio of 0fills, too?! ;)
Talking about compression.
Did anyone notice that the Indilinx Everest is "optimized" for compressed files?
Link to press release
I created a new thread for it
so another manufacturer jumps on compression! interesting....
I thought he already did :confused:
Other way around, I think...if it's optimized for compressed files then it's optimized for files SandForce 'bottoms out' on, i.e., incompressible (which sounds like every non-SF controller out there, equal performance for all manners of compressibility).
Sorry, missed that part. Thanks Ao1! :up:
Looks like more than 10%, but worse than I would have thought. Do you think the compression improved, or did just the measurement improve?
E6 100
E7 98%
E9 13,269 GB
E6 raw values
0 74 100 0 0 0 100 (1byte)
AVG 64.68MB/s
Power on hours 56
Attachment 117950
oh i see. thanks. I misinterpreted that LOL
wonder if that's just shameless marketing or if there will be an appreciable boost in performance with that type of usage vs a standard non-compressed SSD. i guess this actually makes it more interesting :P
E6 100
E7 97%
E9 16,602 GB
E6 raw values
0 63 100 0 0 0 100 (1byte)
The relationship between E9 and the raw value of E6 remains linear.
AVG 64.48MB/s
E6 100
E7 96%
E9 19,619 GB
E6 raw values
0 52 100 0 0 0 100 (1byte)
AVG 64.27MB/s
Power on hours 83 hours
Just coming up to the halfway mark to 7 days, which (within an hour) is when the V2 started to throttle writes.
The relationship between E9 and the raw value of E6 remains linear. At this rate the E6 raw value will expire before 7 days.
Attachment 118013
E6 100
E7 95%
E9 23,103 GB
E6 raw values
0 40 100 0 0 0 100 (1byte)
AVG 64.27MB/s
Power on hours 98 hours
EDIT: To explain how I got some of the figures in the middle graph. The MWI dropped to 99% at 11,078GB. I averaged out subsequent drops in the MWI vs writes to establish a theoretical figure for 100%, which came to 8,592GB.
MWI at 95% and 23,103GB
- Excluding the 8,592GB credit: 23,103GB - 8,592GB = 14,511GB for a 5% drop in the MWI
100% = 290,220GB (+8,592GB credit)
The life curve is projecting 35,000GB
There is an approximate scale factor of 8.5 between the MWI projection and the Life Curve Projection. (Which is remarkably close to the write speed reduction factor that occurred on the V2 :) ).
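For anyone following along, here is a minimal sketch of the projection above, using only the observed E9/MWI figures (the 8,592GB credit is my derived value, not anything SandForce publishes):

```python
# Sketch of the MWI projection worked out above (all figures observed, not specified).
credit_gb = 8_592          # theoretical writes before the MWI left 100%
writes_at_95_gb = 23_103   # E9 host writes when the MWI hit 95%
mwi_drop = 100 - 95        # five points of wear used so far

gb_per_mwi_point = (writes_at_95_gb - credit_gb) / mwi_drop   # 2,902.2 GB per MWI point
mwi_projection_gb = gb_per_mwi_point * 100 + credit_gb        # 290,220GB + 8,592GB credit
life_curve_projection_gb = 35_000

print(f"MWI projection: {mwi_projection_gb:,.0f} GB")
print(f"Scale factor vs life curve: {mwi_projection_gb / life_curve_projection_gb:.1f}x")  # ~8.5x
```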
Attachment 118041
Attachment 118038
E6 100
E7 94%
E9 27,748GB
E6 raw values
0 24 100 0 0 0 100 (1byte)
AVG 63.99MB/s
Power on hours 118 hours
EDIT: I've added a summary of the V3 in the first post. I will only update charts in the 1st post from now on.
Already past 30GiB, getting closer :)
E6 100
E7 92%
E9 33,322GB
E6 raw values
0 5 100 0 0 0 100 (1byte)
AVG 63.88MB/s
Power on hours 142 hours
E6 90
E7 92%
E9 35,097GB
E6 raw values
0 0 90 0 0 0 90 (1byte)
Power on hours 148 hours
Write speeds are dropping quickly.
How quickly?
148 hours is 6 days 4 hours and you have just made it past 35TiB.
I'm not sure if it is fully throttled yet. I just stopped the app to run AS SSD. Screenshot below.
When the raw 1byte E6 value got to the equivalent of -1 the "normal" value dropped to 90. As soon as that happened write speeds started to drop.
It seems that the 1 byte value of E6 was the threshold for throttling to kick in, rather than a fixed time duration.
The V3 could write faster than the V2 so it took less than 7 days. The faster you write the faster the raw 1byte E6 drops. Not just speed either, but the type of writes. 4K full span would have throttled the drive quicker than sequential.
AVG on the app is currently 21.27MB/s.
Attachment 118198
24 hours later running Anvil's app. Now I will leave the drive to idle for 24 hours.
Attachment 118219
wow! man that is big time throttling!
so... have you made any calculations as to how many GB per hour it took to throttle it yet?
^ I posted the Excel file in the first post that records everything I observed. (I'll tidy up the first post later)
• 35,097GB/ 149 hours = 235GB per hour
• Out of the 35,097GB, 8,592GB was within the credit zone.
• Avg amount of writes per drop in the E6 1byte raw value = 277GB
• Disparity factor between MWI projection and the Life Curve Projection = x 8.5
The drive is being left to idle for 24 hours. I will then use John's method to calculate how much data can be written per day.
I'm also monitoring E6 to see if it recovers with idle time. (Not sure if it will recover in 24 hours, but we will soon see.)
When this is done I will look much deeper into compression factors.
Kudos on your findings, great work!
Great work Ao1!
Will be interesting to see how long it takes for E6 to "recover" or if there are any other movements during the idle time.
Something strange to report. I stopped the endurance test and left the drive to idle. The idea was to let the drive idle for 24 hours and then restart the app to see how much could be written before LTT kicked in again.
Once the drive was in idle mode I noticed that the power on hours were not changing from the 166 hours reported at the start. All other drives (including the V2) reported increased hours during this period. (Currently I have 4 SSD's connected to my system).
Whilst the reported hours did not change, the raw value for power on hours did change. At the start of the idle period the raw value in 1-byte decimal was:
52 203 166 0 0 0 166
The first two values changed every minute. Currently this value is being shown as:
47 116 166 0 0 0 166
Weird. The drive has also remained in a throttled state! :mad: E6 has not changed either.
Another surprise is that it has transpired that the V2 40GB drive & the V3 60GB have been set with a 1 year LTT. That makes a huge difference and explains why the V2 was throttled down to 7MB/s.
The V2 was set to:
Life In Seconds: 31536000
Memory Wear Cycle Capability: 3000
For comparison a V2 LE 100GB was set to:
Memory Wear Cycle Capability: 10000
Life In Seconds: 94608000
At first glance it might seem strange that the V2 40GB was set to 1 year and the V2 LE 100GB at three years, but the PE cycle capability holds the answer.
EDIT:
Now that I am writing to the V3 again it seems the power on hours are increasing.
Strange, are you sure that the drive isn't allowed to "power off" during idle time?
Dooh. :shakes: The PC is set to not sleep/ hibernate etc, but the hard disk power setting was set to 20mins. :rolleyes: So much for green computing.
I'm running compression tests currently using your app. (Currently working on 8% fill).
Will revert to the endurance test later.
:)
Working on the thread and I've found someone willing to host the download (TheSSDReview)
Drive SE'd before commencing. 8% fill 1GB test file. (Generates random xfer sizes.)
25% fill coming up
Attachment 118252
Drive SE'd again. Avg write speed @ 25% fill = 18.17MB/s, so I already know that 25% is not being compressed very well as the write speed is being throttled. 8% fill was around 50MB/s.
This is going to take some time :mad:
Using the V2 for such a task would take time so you're better off waiting for the SF2 series drive.
How is that old V2 drive doing?
I'm running this on the V3 due to the 1GB SMART update reporting frequency. The 64GB update reporting frequency on the V2 makes it unsuitable. The throttled state will mean it takes time, but it won't affect the results.
Currently I use the V2 as a temp download folder.
Will you create a new thread to announce the beta release of your app?
Drive SE'd before starting
Attachment 118275
I put the power setting for the hard disk to turn the power off after 5 minutes. I have an X25-M that is idling. The power on count increased for the X25-M, so either something unbeknown to me was accessing the X25-M and preventing it from going to sleep, or power on hours are still recorded when the drive is "powered off".
Kind of strange that the V3 has to be fully powered up to count LTT hours and presumably run GC background tasks. :shakes:
Personally, I'd prefer a disk obey power state requests from the OS :shrug:
Added NTFS compression and your SF-2200 compression numbers to my compression curves charts. I intend to do the same with the LTT-less SF-1200 when I get it :)
You obviously can't add my C Volume and D Volume data, so that's been made to a trendline so that it's bridged between NTFS and RAR-fastest. NTFS was incapable of compressing 67%, 67% No Dedup, and 101%; file size didn't even change one byte.
Based on 8% and 25% settings, it looks like the SF-2200 is actually more powerful than NTFS compression :up:
Attachment 118271
Great graph. 25% fill is now complete and 46% fill is coming up. :up:
Regarding the power state, I believe that the power setting puts the hard drive into a lower power state as opposed to no power at all. If that is correct it seems strange that SF does not record that the drive is powered on in the context of LTT. In a low power state it is still reporting changes to SMART attributes, so why not record power on hours to help alleviate the impact of throttling?
GC could also benefit from a low power idle mode, assuming there is enough power for this to take place.
While such a state makes sense with a hard drive (spin down the platters), it is less clear with an SSD. I do not remember reading of any low power states with SSDs, and I would be surprised if all SSDs implement a low power state. I suppose it could underclock its controller, or possibly cycle the power to the controller off (and back on periodically). But the gains of such a state would not be great. On SSDs with a large DRAM cache, I suppose there could be moderate savings by clearing and not refreshing the DRAM. But the Intel controllers and Sandforce controllers do not have large DRAM caches. So I think the savings from a low power state would be minimal. May as well just have on and off states.
Agreed on the timers, I'd like for the power on counter to move while in low power, or at least some internal timer to move for LTT....if making a drive last X amount of time is the goal and the drive knows time is passing (i.e., it has some power), it should acknowledge and behave like time is passing.
Looks like 46% is landing right between the C Volume and D Volume curves so far. And SF-2200 being more effective than NTFS is still in effect too :up:
46% complete now as above. 67% in progress. I ran a quick copy of the Windows and Program folders. The V3 was able to compress a Win 7 & Office install by 50%. Transferring the Windows and Programs folder could "only" achieve 25% compression.
Attachment 118292
OK something very strange is happening with the power on hours with the V3.
As far as I can see the V2 is counting hours even if the power settings put the hard drive in a low power state.
The V3 does not seem to clock hours unless it is transferring files. (Even when the hard drive is not set to power down).
Maybe this is some sort of work around due to the power related problems with the SF2xxx drives that cause BSOD/ Freezing? I'm using f/w version 2.08.
Has anyone else noticed this on a V3?
It's not getting much rest :)
Great job on the SF2 controller, things are looking better than SF1, as I suspected.
For the lurkers, there is a new firmware for the SF2 Series drives, currently OCZ is the only one that has made it available for download.
That's very, very odd....and not a good sign if LTT depends on power-on hours (the only way to allow unthrottled writes is to write?). Do reads up the counter? If they did, they would at least be a stop-gap. I would be surprised if this is related to the BSOD/Freezing, though.
93.4% after 3 runs, still doing better than the 100% that NTFS managed, nice. Obviously won't out-do NTFS with 101%....NTFS doesn't resize it one byte but surely SF-2200 has a WA of greater than 1.00x.
I think I might have the ability to test a SF-2200 with my own C Volume and D Volume data sets to round out the charts, but probably not for a week or so. By then or slightly after that SF-1200 compression testing (taking 64x longer, sigh) will start and after that SF-1200 endurance testing.
Here are a couple of graphs to summarise the findings on compression.
The comparative compression graph shows a fixed level of compression for files generated by Anvil's app. These compression levels are a given and are then compared to what the SF controller can achieve.
Attachment 118300
Just updated to f/w 2.11 and it seems that the E6 raw 1-byte value for the life curve is no longer available, so I won't be able to monitor the life curve recovering :mad: (on this drive anyway) :D
Power on hours increase now, even after the hard drive power setting is activated. :rolleyes: (Unless something was accessing the disk I did not know about - unlikely)
Bummer that E6 disappeared, was really hoping to see that tick back upward with idle time working.
Not sure if you ever mentioned this, but what was the write speed in Anvil's app with LTT active? If it's anything like the 16-18MB/sec from AS-SSD, then LTT write speeds were almost surely faster than the lifetime slope. 16MiB/s = ~21 disk writes (with parity space, so 64GiB) per day, or 7700 P/E cycles a year, which is way more than what IMFT 25nm is rated for.
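The conversion being done above can be sketched as a small helper, assuming one "cycle" equals the full 64GiB of raw NAND as in the post:

```python
# Sketch of the conversion above: sustained MiB/s -> drive writes per day -> P/E cycles per year.
# Assumes one "cycle" is the full 64GiB of raw NAND, as in the post.
def pe_cycles_per_year(mib_per_sec: float, nand_gib: float = 64) -> float:
    gib_per_day = mib_per_sec * 86_400 / 1024        # seconds per day, MiB -> GiB
    drive_writes_per_day = gib_per_day / nand_gib
    return drive_writes_per_day * 365

print(round(pe_cycles_per_year(16)))   # ~7,700 cycles/year at a 16MiB/s throttle slope
```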
Same here. :shakes:
I just started up Anvil's app to check MB/s. Screen shot below. My drive has a 1 year LTT AFAIK.
To further elaborate on what happened after the f/w update: the Power on Count was reset to 1. At first the E6 life curve appeared to be 100. Cruel blow, as minutes later it changed back to 90. The drive is still throttled.
It doesn't look like idle time releases the throttling in the same way as the V2. Will know better tomorrow.
Attachment 118301
Attachment 118302
Did x09's raw values always have non-zero data a few bytes up from the 195? Is it possible E6 has just been completely depleted?
Here are couple of typical entries from post # 327
52 203 166 0 0 0 166
47 116 166 0 0 0 166
Could be. After 5 hours of idle the drive is still 100% throttled. That would not have been the case for the V2.
E6 had changed to 100, but as soon as I started running AS SSD it reverted back to 90.
I'll leave it on idle overnight. If it's still throttled in the morning something is definitely not right.
It could also be possible you're way under the lifetime slope, in a bad way. 16-17MiB/sec is way too fast for 64GB, 3000-5000 P/E cycles, and integer years of lifetime. Lifetime setting could be very short though...hmmm.
I guess time will tell :shrug:
FW 2.11 looks to be the way to go, but not if it makes the drive unusable; let's hope it gets sorted out.
I just looked at all the compression data and it's a little fuzzy to me. To assure some redundancy, the controller writes additional data for each group of programmed pages, and I always assumed that this kind of activity is also registered as flash writes. But from the test with incompressible data (only 2% more when writing movies) it seems to me it is not registering everything. Also, the zero fill and 8% compression results are quite interesting. From the difference between 14.6% and 17% I would say that any file with a compression ratio below 7-8% will probably compress to the same 14.6% (the compression range for real-life databases).
For me, it is clear that the controller somehow needs to cluster data together to be able to compress it, but how it does that is quite strange. The zero fill test should compress multiple pages into one as a best-case scenario, so I would expect to see the host writes difference as a multiple of the flash writes, and if one is divisible by 8 (the page size), so should the other be. In the earlier test there were 89 host writes for 13 flash write increments, and those values do not match anything I would expect.
I'm not sure parity writes are included in the NAND writes statistics, tbh.
And to figure it out, I think we need a longer/better test of incompressible data before citing 'real WA' as 1.02x. Though the SF-2200 would have to be very good to compress movies at all, the verdict is still out on 'real WA', partially due to a small sample size of just 54.5GB, IMO (not sure the best way to test this... probably 100s of GB of 101% from Anvil's App like the other compression settings).
As for how the SF-2200 handles compression, whatever the 'real WA' is of the SF-2200 actually would presumably need to be factored out of compression figures from Ao1's testing if you wanted to isolate the compression ability of the SF-2200. And regarding the 0-fill being 14.6% and 8% being 17%, there probably is some clustering causing inefficiency with 0-fill (with limited deduplication on top of that, I suspect), but I'm not sure how much of a concern that should be; both usage scenarios are pretty/very rare (especially 0-fill) and the compression is clearly effective on less compressible data sets.
My takeaway from the data is that about the only "real" data that Sandforce can compress significantly is Windows installations and application programs. It did little or nothing with documents, videos, pictures, or audio files. And even on the Windows and applications, the compression ratio was only 75% when measured by copying the folders over. The installation measurement found 50%, but I wonder if that is an accurate measurement (due to the small size of data written), or maybe the installation included writes of empty page files or empty hibernation files that were not included in the "Copy Windows and Program Folder".
The V3 has had at least 9 hours of idle time but it is still throttled. The main differences in throttling between the V2 & V3 are:
• The write speeds with the V2 were throttled to the same degree regardless of xfer size. With the V3 throttling varies depending on xfer size and is less severe for sequential xfers.
• With the V2 even a small amount of idle time was enough for throttling to be released until excessive writes reoccurred. With the V3 the drive has (so far) remained throttled.
With regards to compression testing, the problem with testing more data is that once all the NAND has been written to, WA kicks in and distorts the figures. That is why I SE'd before each run and tried to keep data xfers within the capacity of the drive. To test a wider sample range I would need a much larger drive. That said, I doubt the results would change.
The take away from the testing undertaken by Jeffrey Layton in post #280 was "remember that you're not studying the compressibility of the data file as a whole but rather the chunks of data that a SandForce controller SSD would encounter. So think small chunks of data."
For typical client applications it's hard to see how compression works to achieve anything close to the "up to" specified speeds. The only way of telling is to monitor performance of real applications. I might just get another V3 to do this.
With regards to the power on count the hours seem to be too high now. Currently it is reporting 217 hours. 11 hours after post ~351 and power on hours have gone from 195 to 217 = 22 hours. Exactly double. :rolleyes:
Attachment 118329
Attachment 118330
Is that an assumption, or did you determine it through testing?
I think that write amplification should have little effect when you are doing large sequential writes, followed by deleting the large files, TRIM, and then more large sequential writes. I'd be surprised if there was any write amplification at all in such a circumstance.
So, all you need to do is create an archive (like RAR), with no compression, of all the files you want to test. Then you have one large file that can be used to test the compression via a large sequential write. Copy that file to the SSD (multiple times if it will fit), then delete the file and empty the trash (TRIM), then do it again. Repeat until you have enough data.
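A minimal sketch of the copy/delete loop described above might look like this; the paths, pass count and sleep are placeholders, and it assumes the OS issues TRIM on delete (e.g. Win7 with TRIM enabled):

```python
# Sketch of the copy/delete loop suggested above. Paths and pass count are placeholders;
# assumes the OS issues TRIM on delete.
import os
import shutil
import time

SOURCE_ARCHIVE = r"D:\testdata\sample_store0.rar"   # large archive created with no compression
TARGET = r"S:\endurance_copy.rar"                   # path on the SandForce SSD under test
PASSES = 20

for i in range(PASSES):
    start = time.time()
    shutil.copyfile(SOURCE_ARCHIVE, TARGET)          # one large sequential write
    size_gib = os.path.getsize(TARGET) / 2**30
    print(f"pass {i + 1}: {size_gib:.1f} GiB in {time.time() - start:.0f}s")
    os.remove(TARGET)        # delete so TRIM can reclaim the space before the next pass
    time.sleep(30)           # give the drive a moment to process the TRIM
```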
WA would be minimal for sequential writes, but it can't be excluded/isolated from compression. Anvil's app is generating small xfers that will incur an element of WA. My concern therefore was that more data would most likely give less accurate results once WA became applicable.
I have a few ideas I'm working on, which will involve real files at various sizes, which I will test incrementally. :up:
~ 6 hours later and the power on count has gone from 217 to 230. The SF is working on double time. The good news is the drive is no longer throttled. Now I will see how much I can write before it becomes throttled again.
(John, I was referring to the testing I was doing with Anvil's app re: WA.)
Attachment 118339
Attachment 118340
It took less than 1 1/2 hours to throttle again running Anvil's app (259GB). I wasn't around for the last hour to see exactly when it happened.
There is a problem now, however, due to the way that power on hours are being reported. Do you believe the SMART value for an LTT calc, or actual hours? I don't think I can do much more with this particular drive with regards to LTT.
Waiting 24hrs now that it hit the throttle state? :)
Yes. I think the important question is how much data can be written to the drive per day before LTT kicks in. The only question I have is how to determine that accurately. It is made especially difficult if it takes longer to throttle with sequential transfers. That seems to necessitate at least two independent series of tests. One for random writes and another for sequential. Maybe yet another for a 50/50 mix. Obviously the easiest test is the one that has been discussed. Based on the idea of a linear life curve which the firmware tries to stay below so that the longer you wait the more you can write. But is this anything more than a (very plausible) assumption?
It would be nice if we had a modified version of Anvil's app which monitors the E6 attribute and only writes when the drive is in an unthrottled state. Then you could simply monitor how much data can be transferred per day in unthrottled mode. You could even easily compare Sandforce drives from different manufacturers this way.
Even without E6 maybe a program could be written that monitors its own transfer rate and when it drops below a certain level it stops writing and goes into wait/test mode, periodically making small transfers to test if LTT is still engaged. When LTT is finally lifted it logs the time and starts writing again until LTT engages again. Every 24 hours it could log the total amount written and the number of LTT engagements.
The problem with doing it manually is the granularity issue. How often can you really check to see if the LTT has been lifted? Once an hour? If you assume consistent LTT=FALSE and LTT=TRUE time periods then you could start to narrow it down so that you know almost exactly when you would need to check for the LTT state change. Let's say the LTT=FALSE period is 90 minutes and the LTT=TRUE period is 9 hours, so that you could write for 90 minutes, wait 9 hours and then write for another 90 minutes. You would just have to check Anvil's app at t=90 min and then again at t=10.5 hours and so on, until you observed a consistent pattern over at least a few cycles. If the linear life curve model is correct then the data written per day should be the same as just waiting 24 hours between write sessions.
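Something along the lines of the tool being described could be sketched like this; the thresholds, probe sizes and file path are guesses rather than anything SandForce documents, and a real version would read E6 via SMART rather than infer throttling from write speed:

```python
# Rough sketch of the LTT monitor described above (thresholds, paths and probe sizes are guesses).
import os
import time

TARGET = r"S:\ltt_probe.bin"
CHUNK_MB = 256
THROTTLE_MBPS = 25          # below this we assume LTT is engaged
PROBE_INTERVAL_S = 600      # test again every 10 minutes while throttled

def write_chunk(mb: int) -> float:
    """Write `mb` MiB of data and return the achieved MB/s."""
    data = os.urandom(1024 * 1024)          # one random 1MiB buffer, rewritten mb times
    start = time.time()
    with open(TARGET, "wb") as f:
        for _ in range(mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                # make sure the data actually hit the drive
    os.remove(TARGET)                       # delete so TRIM can reclaim the space
    return mb / (time.time() - start)

written_mb, throttled_events = 0, 0
day_start = time.time()
while True:
    speed = write_chunk(CHUNK_MB)
    written_mb += CHUNK_MB
    if speed < THROTTLE_MBPS:
        throttled_events += 1
        while write_chunk(8) < THROTTLE_MBPS:   # small probes until LTT is lifted
            time.sleep(PROBE_INTERVAL_S)
    if time.time() - day_start >= 86_400:
        print(f"24h: {written_mb / 1024:.1f} GiB written, {throttled_events} LTT engagements")
        written_mb, throttled_events, day_start = 0, 0, time.time()
```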
Weird. E6 seems to now be reporting as pre 2.11. Power on hours are current 252. Still running at double time. (2 power on hours = 1 real hour).
E6 100
E6 raw values
0 2 84 0 0 0 100 (1byte)
E6 100
E6 raw values
0 4 107 0 0 0 100 (1byte)
Power on hours = 267
Maybe this "fast" time counting is to compensate the fact that average users are keeping their computers running for 8-12 hours a day. It's very important for users to be able to write at least X GiB of data per "usable" day and what is more important, the user should not be impacted by the needed idle time. The controller could even have a wild guess on how much should be the acceleration factor by looking at real power on hours count, total cycles of power on/off and total unexpected power failures.
What Sandforce based SSDs desperately need is a way to count power down time between power on cycles. This would probably achievable up to some point with some capacitors and an efficient counter
I think the fact that the warranty throttling is undocumented makes it easier for Sandforce and OCZ to sell SSDs with buggy firmware. After all, if no one knows what is supposed to trigger the throttling, then no one can claim it is not working properly. OCZ blames it on Sandforce being secretive, which is absurd since Sandforce has patented the technique (which isn't particularly innovative anyway) and so the fundamentals can be known to anyone simply by looking up the patent.
All that is needed is for OCZ to document the specifics for each SSD model: what triggers the throttling, how slow is the throttled performance, and what needs to happen for throttling to be released. This seems like an obvious thing for the SSD datasheet, since it directly impacts the functioning of the SSD and does not require disclosing any of Sandforce's trade secrets. But OCZ's representative dubiously claims Sandforce is keeping it secret, and no doubt Sandforce would say that OCZ is keeping it secret. It is like Laurel and Hardy selling SSDs.
I got the V3 on the 19th and it's been powered up ever since.
In theory it should have around 264 hours. With f/w 2.08 it seemed to miss hours, and with 2.11 it seems to have caught up and is now exceeding the theoretical maximum hours the drive could have been powered up for.
Anyway, 24 hours of real time and the endurance app is up and running again. (46 hours in SF time)
E6 is counting down as writes increase on a linear basis.
0 5 204 0 0 0 100 37,484
0 5 3 0 0 0 100 37,700
0 4 253 0 0 0 100 37,717
0 4 194 0 0 0 100 37,787
It seems the second value counts down by one each time the third value has run from 253 down to 1; the third value drops by one for approx 1 GiB of data written.
So in theory 5 x 253GB before it reaches 0 0 = ~1.2TiB / ~6 hours
Attachment 118388
In the absence of any better data I'm going to hang my hat on the figures below, based on an average over the LTT warranty period and 3k PE cycles. On top of the initial credit it appears you also get credit days before throttling kicks in, i.e. you can get away with writing more than the max daily allowance as long as it then evens out over a period of days. I'd guess no more than 7 days.
This is different from what SF state, as they say the PE count will never go below the life time curve, but it clearly does.
I'm hanging my hat on those figures based on the V2 1 year LTT and the ~7MB/s throttle speed, which gives around 220TiB on a MWI calculation. All those figures tie in well.
• 0.60TiB per day with 1 year throttle/ 25.6GB per power on hour/ 7.28MB/s
• 0.2TiB per day with 3 year throttle/ 8.53GB per power on hour/ 2.42MB/s
• 0.12TiB per day with 5 year throttle/ 5.12GB per power on hour/ 1.45MB/s
Ironically it looks like larger capacity drives (>100GB) will throttle more easily and more severely, as they have a 3 year LTT vs a 1 year throttle for drives <100GB.
An allowance of 1.2TiB per 24hrs of idle time works out to 438TiB per year or 7000 P/E cycles.
With a +/- 10%, that's 1.08TiB-1.32TiB per 24hrs.
An allowance of 1.08TiB per 24hrs of idle time works out to 6300 P/E, which is getting close to 1/2yr lifetime setting on 3000 P/E cycle NAND.
An allowance of 1.32TiB per 24hrs of idle time works out to 7700 P/E cycles, the exact same as 16MiB/sec throttle slope.
Also of note, the first two bytes from E6 raw seem to be acting as if they were a 2-byte number, not two separate 1-byte numbers. :)
Was 05 204 the starting value at the end of the 24hrs of idle? If so, then the 1.2TiB number isn't what is important, it's 05 204 = 05C7h or 1479d, or 1479GiB if it's 1:1.
EDIT: that 7.3MiB/sec after WA looks appropriate for a 1yr, 3000 P/E cycle 40GB V2 :)
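A minimal sketch of the 2-byte reading suggested above (purely an interpretation of the observed raw values):

```python
# Minimal sketch of treating the leading E6 raw bytes as one big-endian 16-bit counter,
# per the observation above. The byte values are just the example quoted in the post.
def e6_counter(high_byte: int, low_byte: int) -> int:
    return int.from_bytes(bytes([high_byte, low_byte]), "big")   # high*256 + low

print(e6_counter(0x05, 0xC7))   # 05C7h -> 1479, the GiB-style allowance figure quoted above
```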
05 204 was around the start.
Currently E6 is:
0 638 0 100 (2 byte)
0 2 120 0 0 0 100 (1byte)
Writes (E9) 38,443
E6 Start 1,479
E6 Current 638
Difference: 841
E9 Start 37,484
E9 Current 38,443
Difference: 959
Anvils app is still going strong. Guess it will do for another 638GiB.
So for every 1.14GB written to NAND, E6 moves 1 notch. How has F1 moved? Could LTT on SF-2200 be aligned with Host Writes and not NAND writes? :eek:
EDIT: 1479 * 1.14 = 1686GiB after 24hrs of idle = 601TiB/yr = 9600-11000 P/E cycles, depending how you count (cycle could be 60GB [available NAND], 64GB, 60GiB, or 64GiB [total NAND]).
Typical. Missed the exact time the drive became throttled, but it must have been close to 7 hours. (Looking at Anvil's Avg and current write speeds)
I'm still confident the writes per day in post #374 are about as close a guess as can be had.
F1 is 38,594 btw. I only normally use E9 for writes, but they are both very close anyway.
Attachment 118398
Attachment 118399
Ao1,
Select "Browse results" in the Endurance test and you will be able to spot when the decrease in speed started.
(all loops are recorded in a database)
Whoa, are power on hours moving properly during writes and at double-time when idle?
1339GiB for every 24hrs of idle, that's 477TiB/yr or 7600-8700 P/E cycles, depending on how you count (cycle could be 60GB [available NAND], 64GB, 60GiB, or 64GiB [total NAND]). With 3000 or 5000 cycle NAND, that's not even 1/integer of a year, interesting. Some arbitrary value was assigned?
Either way, 1339GiB a day (15.87MiB/sec) is a very, very large task to even enter LTT conditions. And it does look like 16MiB/sec LTT speed is really close to what you found.
Some SF-1200 compression numbers. 64GB resolution means these figures are rougher than the SF-2200's, but still indicative of how good the SF-1200 is at compression.
That said, it's way too early in the test to say the 8% figure has any merit, this was more of a report of the 0-fill. SF-1200 with 0-fill is worse than NTFS (but so was the SF-2200) but more importantly, it's also much worse than the SF-2200 which compressed it down to ~15% vs. ~24% of SF-1200. If (big if) the 8% Compression test figure stands at 33%, then the SF-1200 is also worse than NTFS (23.7%) at that setting...and more importantly, much worse than SF-2200 (~17%).
Attachment 118406
Interesting... the results contradict what Ao1 found in posts #141 and #148, which was 12.5% for zero fill and 15% for 8% compressible results. In his 0 fill test the resolution was worse than yours, but the 8% compressible results should be comparable. Also, is there any chance your drive is misaligned? Your 4k random reads are identical to my drive's speed when it was misaligned.
I was coming out at 15% on the V2, but I had higher readings over a small data range. I suspect the 64GB refresh was the problem.
2.79TB of 8% fill.
#233 at start = 37,696GB
#233 after 2.79TB 8% fill = 38,144GB
Difference = 448GB
#241 at start = 37,120GB
#241 after 2.79TB 0fill = 40,000GB
Difference = 2,880GB
(I posted the Excel results for the V3 compression comparisons in the first post).
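For reference, the percentages in these posts boil down to NAND-write growth divided by host-write growth; a minimal sketch using the 8% fill figures above (E9/#233 for NAND writes, F1/#241 for host writes, as used throughout this thread):

```python
# Sketch of how the compression figures are derived: growth in NAND writes (E9 / attribute 233)
# divided by the host writes over the run. Numbers below are from the 8% fill run above,
# using the amount fed to the drive as the host-write figure.
def compression_ratio(nand_delta_gb: float, host_delta_gb: float) -> float:
    return nand_delta_gb / host_delta_gb

nand_delta = 38_144 - 37_696          # attribute 233 before/after the 2.79TB 8% fill run
host_written = 2_790                  # ~2.79TB pushed at the drive
print(f"{compression_ratio(nand_delta, host_written):.0%}")   # ~16% of host data hit the NAND
```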
All I did was plug in the drive and format with Windows 7's Disk Management which should align it just fine (right?). Both AS-SSD and Anvil's Apps report it's aligned fine, but there's always a possibility that alignment is not okay.
I thought the random reads were symptomatic of Hynix 32nm's poor performance, but they could well be because of poor alignment.
EDIT: 8% setting is still compressing down to 33% after roughly double the sample size from a few posts up. This is odd that the two SF-1200s are so different :confused:
This is indeed strange... I usually check the starting sector number manually for alignment, but I also knew that W7 disk management does it right. I will also do the 0fill test on my SSD (Vertex 2 120GB) and post the result later.
I just tested the 0 fill compression rate on my OCZ Vertex 2 120GB, but in a slightly unorthodox way. First, I wrote movies until the E9 and F1 parameters incremented (manual refresh in CrystalDiskInfo every 3-5s) and noted the values. I then ran Anvil's Storage Utilities with 0 fill for 339.7GiB and switched to a dumb Java program that wrote continuously to all available free space on the partition and logged results in increments of 2-3GiB (actually 2.44GiB, but rounded down to the lowest int). During this time I followed the changes in CrystalDiskInfo.
So here is what happened:
F1->3328
E9->1984 a little later, after around 13GB of movies written. After the increment I started the endurance test with 0 fill.
F1->3840 after 339.7GiB + ~151GiB = ~490GiB => the SSD incremented the value by 512 for around 503-504GiB written
E9->2048 after 339.7GiB + ~153-159GiB = ~492-498GiB.
Based on what I counted, it incremented by one 64 step for 492-498GiB => compression 12.8%
Based on what the SSD seems to have counted, it incremented one step for 501-507GiB => compression 12.6%
Considering that the SSD is running as the primary drive with the OS and other programs that also wrote data at the same time (although I closed all unnecessary programs), I would estimate a compression ratio of 12.5-13% for 0 fill on my drive.
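The micro-counting approach could be sketched roughly as below; read_attribute() and get_host_written_gib() are hypothetical helpers (e.g. parsing smartctl -A output and the write tool's log), and the GB/GiB distinction is glossed over:

```python
# Sketch of the "micro-counting" approach above: poll the 64GB-granular counter while writing a
# known amount and note the host total at each tick. The two callables are hypothetical helpers.
import time

def micro_count(read_attribute, get_host_written_gib, attr_id=0xE9, step_gb=64, ticks_wanted=20):
    samples = []
    last = read_attribute(attr_id)
    while len(samples) < ticks_wanted:
        time.sleep(3)                          # ~3-5s refresh, as in the post
        current = read_attribute(attr_id)
        if current != last:                    # counter ticked another 64GB of NAND writes
            samples.append(get_host_written_gib())
            last = current
    # average host data written per 64GB of NAND writes -> rough compression/WA estimate
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return step_gb / (sum(deltas) / len(deltas))
```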
Attachment 118443Attachment 118444
Now, a little strange thing during Anvil's tool. The application wrote what seems to be a lot of random data and, according to HD Sentinel, at the beginning of each burst of random writes the peak speed was around 60-70MiB/s, then it suddenly settled at around 28MiB/s.
Attachment 118445
Nice observations :) I'm retesting 8% fill on the V2. Currently the compressed rate is coming out at around 17%, but that is only on 400GB of host writes so far. 64GB refresh periods are a pain :(
I continued testing 8% until I went to bed and it stayed pretty constant, finishing at 32%. It wasn't as large of a sample size as I wanted, so I will test it again (just ~1300GiB and I didn't do any of the micro-counting sergiu did).
I tested 25% for 12hrs and it was compressed to just 86%. SF-2200 was at 41% and both of your SF-1200s have been performing similarly to the SF-2200. :eh:
Your numbers for 0fill, 8% and 25% seem to be double the SF-2200 and SF-1200 (for the first two tests). I would say it is somehow not aligned, or internally it's doing something that doubles the writes. Maybe some static data rotation? :confused: Could you do a short test with 4K incompressible random writes? If there's doubling, you should see almost twice the writes.
Also, regarding testing, I did the micro-counting because, it unfortunately being my main and only drive, I cannot let it wear until I see a significant increase in the values. But you can easily achieve the same or much better precision if you have at least 20-30 increments (100 would be ideal for a 1% error margin).
This info is covered by SF under NDA, we asked if this could be made public but we were told NO.
You guys need to remember SF want to protect their IP here; you are now in fact *possibly* reverse engineering that IP, which could give a competing controller manufacturer an advantage.
Everything OCZ was able to tell has been told...this has been more than any other SF partner....we do our best to keep you in the loop but there are limits...which have now been hit.
I have no static data on my drive. I'll get to incompressible data soon enough.
I agree with you on doing micro-counting, it's the fastest way to do it and it's accurate :) I'm out to kill this drive so I'm doing it the normal way :)
I haven't tested my Vertex 3 yet, the table I'm showing is just your Vertex 3 numbers summed instead of an average of an average (which is why they're slightly off). Instead of averaging 15%, 16%, 15%, and 17% when the test sizes aren't the same (1, 1, 1, and 10), I'm just summing the total E9 and F1 growth and dividing based on those numbers.
Looks like with the SF-1200, I'm doubling except at 0-fill. I've had 46% going for a few hours, let's see what that is at now.
No worries, we don't need to be given all the answers, more than half the fun is finding it out ourselves (or as best we can). :)
If I were SF, I'd say no to releasing the info too :p:
(as for us reverse engineering for a competitor--if a competitor wanted to do it, they've had a chance to get months (or even a year+) ahead of us and could put more resources into it than us)
EDIT: early in the test for 46%, only 640GiB of Host Writes, but the Compression/WA is 150% :eek: Ao1's SF-2200 was 70% so I seem to still be doubling the SF-2200 (and almost exactly doubling). Could it be my firmware is reporting double but not writing double?
Whoops, chart changed. TBH I only have confidence in my figures for the V3. I'm not sure how you are micro-counting, but it sounds like it is more accurate, so if your results conflict with mine I can take mine out or re-check them. Currently 46% is coming out at 80%, but without E9 & F1 being aligned it's open to being inaccurate.
When I get time I'll consider how write speeds are linked to the compression ratios. That should provide a cross check.
For what it's worth, V2 46% fill = 75% after 0.5TiB of writes. Just after I stopped, the E9 value changed, which would have given 83% if included.
That is because warranty throttling is not a feature, but a limitation. It does not benefit anyone except possibly OCZ (no need to explain to an RMA customer that the warranty does not apply because they wrote 1PB in 3 months).
And customers should avoid drives that are throttled. So basically what you are saying is that OCZ is selling an inferior product, but hiding that fact from potential customers.
:rofl: Right, you asked Sandforce if you could say that the drive gets throttled if you write a lot of data to it in a short time, and they said no, you cannot say that. Thanks, Hardy, that's good for a laugh. Sure, the fact that the drive gets throttled is reverse engineering, yup. You are a riot!
Tony's post is a little weird, I can't tell if he's being defensive or just paying you guys a compliment. I think it should be taken as a compliment either way though.
You guys are like the A-Team or MacGyvers of SSD testers, reverse engineering top secret classified information about controllers using paper clips, ballpoint pens, rubber bands, tweezers, nasal spray, and turkey basters! :D
Here is a comparison of write speeds using different levels of compression rates.
For sequential xfers there are three main bands:
• 0% to 8% Compression = ~100% of max speed
• 25% to 46% Compression = ~45% of max speed
• 67% to 100% Compression = ~30% of max speed
4K QD1
Performance approximately the same regardless of compression ratio
4K QD 4 & QD 16
Compressible 4K xfers are assisted with queue depth.
Attachment 118473
Attachment 118474
Attachment 118475
EDIT: I added an X25-M just to help show where compression might be helping.