One hour of idle time later. I did not run a SE again; this recovery is due to idle time alone.
Now I will restart Anvil's app to see how long it takes to get throttled.
Attachment 116726
I had to go out after leaving Anvil's app running. Within an hour the drive was throttled again.
Attachment 116728
Throttled state
CDM non-compressible data & 0fill. (0fill ran after non-compressible data)
Attachment 116733
Attachment 116734
Throttled state, different benchmark (non-compressible data)
Attachment 116735
So sequential reads are throttled too, down to about 1/3, while random reads are unaffected. Interesting. I have to assume that it is unintentional; there's obviously no reason to throttle reads. So for some reason they weren't able to throttle writes without throttling reads. I wonder if that might give some clue to how their throttle algorithm works.
Am I the only one champing at the bit, dying to get a look at the firmware code? I wonder where it is stored. Probably in some flash inside the SandForce chip itself, and not in a separate EEPROM or on the "reserved" portion of the OEM-supplied flash memory. According to OCZ the firmware data is not only readable but writable from software, presumably PC software. I believe a customized version of mptools was mentioned. So the data must be addressable/retrievable/dumpable by the computer somehow.
This might be a silly question but is it only SandForce drives that suffer from lifetime throttling or all SSDs?
Here is my Vertex LE SMART data. This drive has been used a little over a year in my 24/7 rig. No special stuff, just put in. I don't really game on this rig; it's more of an email/browser and Newz rig. I use it for tons of normal user stuff and intentionally never benched it. This rig is off limits to my tinkering *for the most part*. Comments?
http://i517.photobucket.com/albums/u...e1/smartLE.png
IIRC, Tony has stated that the Vertex LE is the only SandForce drive which has a completely transparent SMART statistic for life throttling, as in it will actually tell you explicitly when it is in the throttled state. Unfortunately I don't think he said which SMART attribute it was, but I already see some interesting ones there, like "Wear_Range_Delta". Instead of running Anvil's app for a long time, you could just run it for a short time to see if the drive does anything interesting. The SF-1500 was considered "enterprise" class, I believe, so it could be interesting for getting a better look up SandForce's skirt. How much would you be willing to abuse it in the name of science? I'd at least take a close look at every SMART value you can find and see how they change with use.
@ gojirasan. SMART Attribute 230 (E6) will tell you if the drive is throttled, but that attribute is hidden from view. Seems that is the case for the LE as well.
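If anyone wants to check for it anyway, here is a minimal sketch that scans smartctl output for attribute 230. It assumes smartmontools is installed and that the drive is /dev/sda; on firmware that hides the attribute, expect it to come up empty.
Code:
# Sketch: scan smartctl -A output for SMART attribute 230 (E6).
# Assumptions: smartctl (smartmontools) is installed and the drive
# is /dev/sda. Firmware that hides the attribute will return nothing.
import subprocess

def find_attribute(device, attr_id):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
            return line  # full attribute row; raw value is the last column
    return None

row = find_attribute("/dev/sda", 230)
print(row if row else "Attribute 230 not exposed by this firmware")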
It seems that 0fill quite literally puts zero fill on the NAND. I need to do some more checks, but it seems all the SF records is something like "0 x <count of zeros>" (a few bytes of data).
All you are benching is how fast the SF processor can work with zeros.
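As an analogy only (DuraWrite's compression is proprietary; zlib is just a stand-in), any general-purpose compressor collapses an all-zero buffer to almost nothing, which is why 0fill mostly benchmarks the controller rather than the NAND:
Code:
# Analogy only: DuraWrite's compression is proprietary; zlib stands in
# to show why an all-zero buffer costs the controller almost nothing.
import zlib

zero_block = bytes(1024 * 1024)        # 1 MiB of zeros
compressed = zlib.compress(zero_block)
print(len(zero_block), "->", len(compressed), "bytes")
# Prints roughly "1048576 -> ~1000 bytes"; the exact count depends on
# the zlib version, but it is a tiny fraction of the input either way.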
@ Comp, your workload shows that you are hardly getting any benefit from compression. You are also writing a lot! 3.92GB per power on hour.
Power on hours contains more than just hours; it's more like a timestamp. I'll post one using smartmontools which shows how it can be interpreted :)
@ct
I'd upgrade the firmware; my LEs are all at 1.33 and performance is just as great as on the old FWs. I can do a test for you.
Pretty strange that your NAND writes vs host writes ratio is close to 1 and that you are writing more than you are reading. Do you reimage?
What annoys me is that the old drives only increase the usage attributes every 64GB, while SF2 updates every GB.
We need more SF owners to post their SMART values with a brief outline of how they use their SSD to get a better overview.
Brahmzy any chance of posting? Anyone else?
Off topic, but after the drive got throttled I started getting shedloads of read errors.
31 minutes to run a CDM benchmark! (4GB 1 pass)
Attachment 116766
I can't recall if there were such issues (resets) in the early FW updates; not impossible at all.
This is the readout using smartmontools
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 120 120 050 Pre-fail Always - 0/0
5 Retired_Block_Count 0x0033 100 100 003 Pre-fail Always - 0
9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age Always - 241h+08m+33.550s
...
Strange that it seems to be writing at such an amazing rate... I have only imaged the drive twice; both times there were OS issues for one reason or another. I have never updated the FW because it seems to sometimes make things worse with some drives... you know the old saying: if it ain't broke, don't fix it :)
I am considering flashing it now though, as the new LE FW seems to be pretty mature.
NOTE: Forgot to mention, this is one of those "enhanced" version SSDs that allows more capacity. Wonder if this is rearing its ugly head?
Vertex2 50GB used in my work laptop (Lenovo T500, ICH9) running Win7 x64. I use a page file and almost always put the machine to sleep. The software I use the most is Outlook, Firefox with its cache in RAM, the vSphere client, and connectivity tools such as RDP, PuTTY, and all manner of VPN clients.
Just a few read errors. :D
Code:
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 100 100 050 Pre-fail Always - 0/3552567
5 Retired_Block_Count 0x0033 100 100 003 Pre-fail Always - 0
9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age Always - 1372h+41m+53.950s
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 603
171 Program_Fail_Count 0x0032 000 000 000 Old_age Always - 0
172 Erase_Fail_Count 0x0032 000 000 000 Old_age Always - 0
174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age Offline - 31
177 Wear_Range_Delta 0x0000 000 000 000 Old_age Offline - 0
181 Program_Fail_Count 0x0032 000 000 000 Old_age Always - 0
182 Erase_Fail_Count 0x0032 000 000 000 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
194 Temperature_Celsius 0x0022 030 030 000 Old_age Always - 30 (Min/Max 30/30)
195 ECC_Uncorr_Error_Count 0x001c 100 100 000 Old_age Offline - 0/3552567
196 Reallocated_Event_Count 0x0033 100 100 000 Pre-fail Always - 0
231 SSD_Life_Left 0x0013 100 100 010 Pre-fail Always - 0
233 SandForce_Internal 0x0000 000 000 000 Old_age Offline - 448
234 SandForce_Internal 0x0032 000 000 000 Old_age Always - 448
241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age Always - 448
242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age Always - 1472
Here is my latest response to your cause here. Trying to help as I can, but I won't be pulled into heavy debates about the inner workings of this controller. Been there... done that... and no offense whatsoever is intended, but I don't have time to revisit it every time someone thinks they have it figured out and I'm wrong again. To me it's always been more about the "what" than the "why" with these controllers. The 6G drives are much improved and should not be lumped together with these older gens' algorithms for anything more than a reference point.
http://www.ocztechnologyforum.com/fo...l=1#post650516
Here are my SMART stats from the Toolbox for one of my six V2 drives. Hope they can help anyone who has the time or intestinal fortitude to figure them out. I don't have either and have wasted enough of my time (and drive's life) trying to figure that gen of controller out. lol
http://img232.imageshack.us/img232/4656/v2smartdata.jpg
I will say this quickly though. There is MUCH confusion about the "time calculations" being used to determine when lifetime throttling will be implemented. Time has absolutely nothing to do with it (at least at this level of throttle, although "hammered states" may very well rely on the time/data size correlation). It has to do strictly with the capacity of the drive in question. EVERY first-gen SandForce drive WILL throttle IMMEDIATELY when all NAND has been touched, because the required GC map has been fully formed. Write compressible data? It takes more to hit it all. Write incompressible data? It takes nearly the same amount of logically written data to hit the same physical space. This is where some of the confusion arises: the compression algorithms muck up the result for those trying to measure throttling by the amount of data written.
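To put that in toy-model form (the capacity and compression ratios below are hypothetical illustrations, not SandForce figures): throttling engages once every physical block has been written, so the better the data compresses, the more logical writes it takes to trip it.
Code:
# Toy model of the behaviour described above; the 128GB capacity and
# the compression ratios are hypothetical, not SandForce figures.
# Throttling engages once every physical block has been written, so
# the better the data compresses, the more logical writes that takes.
def logical_gb_until_throttle(physical_capacity_gb, compression_ratio):
    """compression_ratio = physical bytes stored per logical byte."""
    return physical_capacity_gb / compression_ratio

capacity = 128  # GB of raw NAND (hypothetical)
print(logical_gb_until_throttle(capacity, 1.0))  # incompressible: 128.0 GB
print(logical_gb_until_throttle(capacity, 0.5))  # 2:1 compressible: 256.0 GB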
If you SE a drive that was previously throttled and the drive were to slow down before all capacity had been rewritten once more, then you have other issues at play. I won't even begin to speculate as to why that would occur (though I did a bit in the above link), BUT you would surely be sending the wrong man (DuraWrite) to prison for that crime. DuraWrite's maps are extremely consistent, and ALL blocks must be written (though they do NOT have to be full blocks) for the throttling to be implemented again. That is SandForce fact, regardless of SandForce model (first gen only) or capacity.
TRIM is also highly misunderstood on these controllers, and anyone who sees benefits from its use on other controllers is easily confused when trying to relate it to these drives. It's VERY lazy; most of the blocks just get marked and set aside/mapped for later recovery during GC time. Surely not the smartest way to do it, and MANY have complained in the past. This is one of the greatest advancements on the newer 6G controller, along with a larger recycling engine, which was surely needed for immediate TRIM-released blocks to even become a possibility.
Also worth a mention: I found that the latest firmware revisions have actually recalculated the lifespan based on SandForce's own internal testing. So those who are comparing these metrics will want to be on the latest firmware revision (1.33). It was one of the rare occasions that SandForce reps ever showed their faces in public, by starting/replying to a thread on the OCZ forums. I was even called "astute" for the catch in reported lifespan changes. Was nice to hear for a change, as most just call me "astupid". LOL
230/E6 is only available on SF-1500/2500/2600 series controllers/firmware
It's only the lowest 4 nibbles of that raw value that are the power-on hours.
The upper nibbles of that raw value are the number of milliseconds since the last power-on hours update.
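A sketch of that decoding follows. It assumes the hours sit in the low 16 bits and the milliseconds occupy all the bits above them (the example reading needs more than 4 nibbles of milliseconds); none of this is verified against SandForce documentation.
Code:
# Sketch of the decoding described above. Assumption: hours in the
# low 16 bits, milliseconds since the last hour tick in all the bits
# above them. Not verified against SandForce documentation.
def decode_poh(raw):
    hours = raw & 0xFFFF
    msec = raw >> 16
    minutes, ms = divmod(msec, 60_000)
    return f"{hours}h+{minutes:02d}m+{ms / 1000:.3f}s"

# Reconstructing the reading shown earlier: 241h+08m+33.550s
raw = (((8 * 60 + 33) * 1000 + 550) << 16) | 241
print(decode_poh(raw))  # -> 241h+08m+33.550s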
Anvil, my V2 also only reports every 64GB :(
I used your app to generate 1TB of 0fill writes. The figures for #233 must therefore be accurate only to within 64GB.
#233 at start = 37,184GB
#233 after 1TB 0fill = 37,312GB
Difference = 128GB
#241 at start = 35,776GB
#241 after 1TB 0fill = 36,800GB
Difference = 1024GB
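Those two deltas give the write amplification directly:
Code:
# Figures from the post above: write amplification is the delta of
# #233 (NAND writes) divided by the delta of #241 (host writes).
nand_delta = 37312 - 37184  # GB, attribute 233
host_delta = 36800 - 35776  # GB, attribute 241
print(nand_delta / host_delta)  # -> 0.125, i.e. the 1/8 noted below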
Attachment 116780
Attachment 116781
Attachment 116783
Seems the TRIM hang is more severe with 0fill btw.
Attachment 116784
So this leads to chunks of 32KB; in the best case it spares 7 pages out of 8, which gives a minimal write amplification of 0.125. As for TRIM, could you disable it and try to run the endurance test without it? I have a wild guess that the drive does not behave so well with the command.
Your guess would probably be correct as far as the small lags you guys are seeing are concerned. The more the drive must do, the more overhead and losses result.
Everyone interested in SandForce should also keep in mind that just because TRIM commands are sent/marked immediately has absolutely squat to do with when the controller decides to recover/release those blocks back into the fresh block pool. TRIM-released blocks are more often than not recovered during low-activity idle times, often called "lazy TRIM" by some users. Hence the excellent GC recovery that's found with occasional logoff idles with power remaining to the drive (S1 sleeps).
@ groberts101
According to my guesstimates in post #27, you would need to write at least 10.75GB per power-on hour to see throttling in the form I have experienced. You are nowhere close to that based on your SMART data. (I actually wrote ~28,352GB in 168 power-on hours.)
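Spelled out (figures from my SMART data; the 10.75GB per power-on hour threshold is only my post #27 guesstimate, not a documented SandForce number):
Code:
# Figures from my SMART data; the 10.75GB per power-on hour threshold
# is only my post #27 guesstimate, not a documented SandForce number.
host_writes_gb = 28352
power_on_hours = 168
print(host_writes_gb / power_on_hours)  # ~168.8 GB/h, far above 10.75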
If a SE has restored the performance of that particular drive, it is unlikely it had anything to do with this form of throttling that I have experienced.
Also, throughout running Anvil's app there was no significant slowdown in performance. Overall performance was constant UNTIL throttling kicked in; when that happened, write speeds dropped very quickly (over a few seconds).
Now you might say that, for whatever reason, my numerous attempts to SE using three different apps across two PCs all failed.
Here is the kicker: immediately after a SE, write speeds were still throttled. If the drive was degraded or throttled because the SE failed, why were speeds restored with nothing more than one hour of idle time?
It is not so wild :) I proved the hang was TRIM related over in the Endurance thread.