Yes that's correct. I didn't realize the raw values in hex had changed, strange that CrystalDiskInfo does not translate the hex to decimal? Why does it say 0 there?
@ John
Those specs are for 25nm Micron NAND, with a 1MiB block. I'd also guess multiple channels. Seems like the controller/SATA interface adds quite a bit of overhead. (Edit: easy to see why a clean drive can give really high benchmark figures)
@ SynbiosVyse
Function > advanced features > Raw Values > Select preference on how you want to see the values.
M225->Vertex Turbo 64GB Update:
60.84 TiB
MWI 59 (drops by 1 for every 50 raw wear)
2064 Raw Wear
116.81 MB/s avg for the last 29.13 hours (on W7 x64)
MD5 OK
Man, this drive flies on Win 7 :shocked:
bluestang, I'm tracking SMART attribute C7 (host writes in sectors), CF (average erase count), D0 (max erase count), D1 (MWI), and C3 (reallocation equiv, I think). If you could include those raw values (preferably in decimal) with each update, that'd be great. (also keeping tabs on speed and MD5 check, but that's not a SMART attribute, of course)
SynbiosVyse, I need the raw values (preferably in decimal) of SMART attributes E9 (NAND writes), F1 (host writes), B1 (wear range delta), and 05 (reallocation equiv, I think) from you, in addition to average speed, MD5 check, and E7 normalized value (MWI).
Doesn't matter if it's a screenshot or written out, whatever is easiest for both of you :)
Also of note, your recent WA is down to just 1.22x (though I'm not certain about the numbers without the SMART attribute raw values). Before enabling TRIM, your WA was around 2.43x, so it seems TRIM has cut WA in half. :eek:
Those timings are at the NAND level. If a block is not clean, i.e. it contains a small amount of valid data, that data has to be relocated before the block is erased, which adds time to the operation. Deciding where the data ends up is a processing task, which takes time to calculate. What I tried to say was that when a drive is in a fresh state it doesn't have to worry about any of that.
A secure erase can be executed in under ~2 seconds, so there must be a way to access and erase all blocks more or less simultaneously. A block erase can therefore be very fast if the controller does not have to check whether data is valid, map data or manage WA.
When the drive is in a used state processing time is required, which adds overhead. As can be seen below that overhead can vary significantly depending on the state of the drive.
Attachment 118951
In general it is true that GC may need to relocate some flash pages before a block can be erased. But in the specific case of the ASU endurance test on the Samsung 470, I think that there is little relocation needed, for two reasons -- the amount of random writes is relatively small compared to sequential writes, and it is set to keep 12GiB free. With few random writes, 12GiB free, and TRIM, the GC just does not have much work to do.
As for SE in 2sec, I think that is highly dependent on the SSD. I've had a SE take more than 30sec (I think it was an Intel G1). And I've seen references to it taking a minute or two. Some SSDs have built-in encryption, so all it has to do is generate a new encryption key and mark all pages invalid in the index, so that could be done very quickly. But for SSDs without encryption, it is not "secure" to just mark all the pages as invalid -- it needs to go through and erase all the blocks. So I can see that taking more than a few seconds (but still less time than it would take to write zeros to the entire SSD).
In principle I agree with you that an erase operation at the SSD level is likely to be significantly faster than host write speed capability.
Whilst I don't think the erase time is an issue for the WA calculation, I'm still not sure how WA has been calculated. Somehow it does not seem possible that it can be so high, although the host write speeds imply that WA is being sacrificed to some degree.
It would be good if some of the key assumptions being made in the thread could be summarised in the first post.
It's interesting to see the difference TRIM has made to bluestang. Significantly faster write speeds and reduced WA.
Regarding SE, did anyone monitor writes during an SE?
I've checked the SF drives and there are 0 writes; the 2-second SE on the SF-based drives is because they are SE'd by sending a specific voltage to the NAND.
--
236.73TB Host writes
Reallocated sectors : 6
MD5 OK
33.34MiB/s on avg (~60 hours)
I would not expect to see any increase in writes during a SE. I think a cycle for a NAND cell is an erase followed by a program operation, so during SE you only have half of the cycle.
@johnw: could you run the endurance test for 20-30TB with trim disabled? I am curious to see what would be WA when the drive has no clue about what has been erased.
Also, about general write performance, it was specified earlier by Ao1 that programming a page normally takes 900μs. 8 dies * 4KiB * (1s / 0.9ms) = ~34.7MiB/s, which is much smaller than what a normal SSD can do. Does anybody know how many pages could be programmed in parallel for one die?
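As a sanity check on that arithmetic, here is a minimal sketch (assuming, as quoted above, 8 dies, 4 KiB pages and ~900μs per page program, with no interleaving):

```python
# Rough estimate of sequential program throughput with no interleaving,
# using the figures quoted above: 8 dies, 4 KiB pages, ~900 us per page program.
dies = 8
page_kib = 4            # KiB written per page program
t_prog_ms = 0.9         # page program time in milliseconds

pages_per_sec_per_die = 1000 / t_prog_ms              # ~1111 programs/s per die
throughput_mib_s = dies * page_kib * pages_per_sec_per_die / 1024

print(f"~{throughput_mib_s:.1f} MiB/s")               # ~34.7 MiB/s
```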
18.40 hours
18.8203 TiB written
58.20 MB/s
MD5 ok
E9 8832
EA/F1 19328
You were right :p Health finally dropped to 97% today.
05: 0
B1: 6
E7: 97%
E9: 8960
EA/F1: 19456
SynbiosVyse, something does not seem right. Are you running at 46% fill? It seems like you have racked up a lot of writes since your last update. Even if the last screenshot was taken just before the MWI turned to 99% it still seems a big drop to now be at 97%.
Sorry, don't mean to doubt you, but the data is quite strange. Certainly very different to the OCZ drives.
You were right with your assessment previously. I was originally running 0-fill and changed to 46%.
Quote:
E9 (if correct) is only showing 5,312 GiB of writes to NAND. F1, host writes, is showing 14,976 GiB. Were you running 0 fill? Now 46% fill?
My drive only has a few MiB of data as I had mentioned before and I set min GiB free to 1 GiB, quite a different setup than what you guys were running before. My goal was to kill this drive as fast as possible.
I did see 100% health yesterday and when I first looked at it today it was at 97%. Unfortunately if it was ever set to a value in between, I missed it.
I don't think it's that far off the V2 40GB. The F40-A had ~3.56TiB of NAND writes between 100 and 97 where the V2 40GB had ~6TiB (5.5TiB host writes * 1.1x) of NAND writes between 100 and 97. Assuming the V2-40 has 34nm 5000 cycle NAND and the F40-A has 25nm 3000 cycle NAND, it works out pretty well.
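A quick sketch of why those numbers "work out pretty well" (assuming NAND written per MWI point scales with the rated P/E cycles, and using the TiB figures above; none of this is a confirmed spec):

```python
# Sketch: NAND written between MWI 100 and 97 should scale with the rated
# P/E cycles, if both drives report MWI the same way (an assumption).
f40a_nand_tib = 3.56          # F40-A (25nm, ~3000 cycle NAND), reported above
v2_nand_tib = 5.5 * 1.1       # V2 40GB (34nm, ~5000 cycle NAND): ~5.5 TiB host * ~1.1x WA

observed = v2_nand_tib / f40a_nand_tib    # ~1.70
expected = 5000 / 3000                    # ~1.67

print(f"observed ratio {observed:.2f} vs expected {expected:.2f}")
```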
At this rate, it seems the F40-A is only 2 days away from LTT activating (if it's set to 1 year lifetime).:eh:
05: 0
B1: 6
E7: 97%
E9: 9472
EA/F1: 20096
22.26 hours
19.5930 TiB
58.23 MB/s avg
MD5 Ok
What are some indicators of LTT? Slow write speed?
Has anyone doing these tests ever achieved the point where they could not write to the drive at all?
I have another one of these drives (virgin) ready to go again if you guys want to see another test with more controlled parameters. :)
The write amplification is calculated as the ratio of sa177 raw * flash-capacity / writes-to-SSD. This of course assumes that sa177 raw is counting the average number of erase cycles of the flash in the SSD.
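In code form, that calculation looks roughly like this (a sketch that assumes sa177 raw really is the average erase count; the example numbers are from the Samsung 470 updates in this thread):

```python
def write_amplification(avg_erase_count, nand_capacity_gib, host_writes_gib):
    # WA = (average erase count * NAND capacity) / writes sent to the SSD
    return (avg_erase_count * nand_capacity_gib) / host_writes_gib

# e.g. the 64GB Samsung 470 at sa177 raw = 35458 and 433.397 TiB of host writes:
print(round(write_amplification(35458, 64, 433.397 * 1024), 2))   # ~5.11x
```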
I'm not surprised that TRIM helped write speed and reduced WA. That is exactly what TRIM is supposed to do. By increasing the number of invalid blocks for GC to work with, performance is increased and write amplification is reduced since collecting invalid pages is more efficient when there is more "scratch" space. That is almost exactly the same thing as increasing over-provisioning in order to increase lifetime and help performance.
johnw, your reallocations are accelerating :yepp:
Attachment 118970
(disparity between the bottom of the pack and your reallocations made me switch to a logarithmic scale for that axis....and the line still appears nearly linear with a slope greater than 0, meaning acceleration)
How do you know the reallocations? Is that C4?
I don't think a program automatically directly follows an erase. My understanding is that "program" is a page-operation (basically a write to a page), and it can only be done on a page in a block that has been erased (cannot re-write or re-program a page). So it would make no sense to program the pages in a block after erasing the block, unless the SSD had actual data to write to the pages.
I would have tried it with TRIM disabled if we had thought of it a couple hundred TiB ago, but now I think the Samsung 470 is in deterioration (with sa178 moving quickly) and I do not want to disturb the conditions of the experiment now. Maybe someone else with a Samsung 470 can try that experiment.
As for flash write speed. The writes can be interleaved, possibly up to 5 per channel, but I think that requires more die than channels. For example, if there are 8 channels and 32 die, the writes can be interleaved 4 times, effectively increasing the write speed 4 times over the number you computed there. I think this may only be possible with synchronous flash, but I am not certain that it cannot be done with async flash (although async flash is slower at writes than sync flash, so there must be a reason for that).
Even with interleaving at 5 times, it does not explain 250-300 MB/s write speeds that can be achieved on 240-256GB SSDs using 8GiB flash die. There must be additional tricks beyond interleaving to increase the write speed.
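A rough sketch of the interleaving arithmetic (assuming the 8-channel / 32-die example above and the same ~900μs per 4KiB page program; real controllers clearly do more than this):

```python
# Interleaving sketch: overlapping page programs on the dies behind each
# channel multiplies the base throughput by the interleave factor.
channels, dies = 8, 32
page_kib, t_prog_ms = 4, 0.9

base_mib_s = channels * page_kib * (1000 / t_prog_ms) / 1024   # one die busy per channel
interleave = dies // channels                                  # 4 dies per channel

print(f"no interleave: ~{base_mib_s:.0f} MiB/s")                          # ~35 MiB/s
print(f"{interleave}x interleave: ~{base_mib_s * interleave:.0f} MiB/s")  # ~139 MiB/s
print(f"5x interleave: ~{base_mib_s * 5:.0f} MiB/s")                      # ~174 MiB/s, still short of 250-300 MB/s
```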
Might as well...was holding off until reallocations of a drive (any drive) climbed, but no sense having that awkward dead space on the bottom when it can be future-useful and not awkward up top :)
Attachment 118971
05 and C4 raw values for your F40-A, I believe
I may be wrong, but I think that's the only indicator on an SF-1200 drive.
C300 Update
204.177TiB host writes, 31 MWI, 3452 raw wear, 2048/1 reallocations, MD5 OK, 62.5MiB/sec
SF-1200 nLTT Update
79.69TiB host writes, 50.69TiB NAND writes, 77 MWI, 811 raw wear (equiv), wear range delta 3, MD5 OK, 56.3MiB/sec
Correct. What I referred to as a "cycle" is the idea that you cannot have a program unless the page is part of a block which was erased. Because there is normally no program operation during a SE, there will be no "cycle" and no write counting. Now what would be interesting is to have a counter for both block erases and page writes, because this would give us a 100% accurate write amplification number. Unfortunately, I have seen no SSD that counts both parameters.
M225->Vertex Turbo 64GB Update:
75.24 TiB
373 hours
MWI 54 (drops by 1 for every 50 raw wear)
2337 Raw Wear
114.26 MB/s avg for the last 66.5 hours (on W7 x64)
MD5 OK
Attachment 118996
433.397 TiB, 1175 hours, sa177: 1/1/35458, sa178: 55/55/452, 111.30 MB/s
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
Yesterday I noticed that the Avg MB/s field had been creeping down very slowly, in the 0.01 digit each day. It was still 113.13, but it seemed like the speed was decreasing. I think that ASU was averaging the speed from the time the program was first started, which was many days ago. So after yesterday's data, I restarted ASU so that the average speed reported would only be for the last day. So today it read 111.30 MB/s. I'll keep restarting ASU after each day's data so that the average is only for the last day, and reporting the speed.
Very interesting on the last two updates.
M225->Vertex Turbo 64GB's recent WA is still shrinking. It was 1.89x for the initial TRIM-enabled chunk of the test, 1.228x for the 2nd chunk, and 1.14x for the most recent chunk. It makes sense that net WA reduces as the proportion of low-WA writes increases, but the actual recent/instantaneous WA is still shrinking too :eek: :yepp:
I don't want to get too spammy with the charts (full chart update later today, I think), but the Samsung 470's reallocated sector count slope is starting to turn upward even with the logarithmic scale....death march has turned into a jog.
m4 update:
Been away for the weekend but my m4 has been working like a busy bee. :D
304.3266 TiB
1004 hours
Avg speed 89.46 MiB/s.
AD gone from 193 to 179.
P/E 5329.
MD5 OK.
Still no reallocated sectors
Attachment 119005
05: 0
B1: 9
E7: 94%
E9: 13952
EA/F1: 25728
49.32 hours
25.0284 TiB
58.38 MB/s avg
MD5 Ok
C300 Update
210.195TiB host writes, 29 MWI, 3553 raw wear indicator, 2048/1 reallocations, 62.45MiB/sec, MD5 OK
SF-1200 nLTT Update
84.938TiB host writes, 54.75TiB NAND writes, 75 MWI, 876 raw wear (equiv), 56.3MiB/sec, MD5 OK
Charts Updated :)
All bar charts are sorted by their respective equivalent of Writes So Far. SF-1200 MWI Exhaustion expectation is overly optimistic due to me still running compression tests (and 0-fill was redone after it went below 100MWI)....it will probably always be optimistic until MWI does deplete. SF-1200 observed WA is also optimistic, for the same reasons.
Host Writes So Far
Attachment 119011
Attachment 119012
Normalized Writes So Far
The SSDs are not all the same size; these charts normalize for available NAND capacity.
Attachment 119013
Attachment 119014
Write Days So Far
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Attachment 119015
Attachment 119016
Host Writes vs. NAND Writes and Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 119017
Attachment 119018
Attachment 119019
M225->Vertex Turbo 64GB Update:
81.16 TiB
387 hours
MWI 51 (drops by 1 for every 50 raw wear)
2451 Raw Wear
110.71 MB/s avg for the last 15.5 hours (on W7 x64)
MD5 OK
Attachment 119025
EDIT: And just think, my CD - Max PE Count Spec should be 10,000 for this NAND.
05: 0
B1: 11
E7: 92%
E9: 16832
EA/F1: 29312
66.60 hours
28.5219 TiB
58.52 MB/s avg
MD5 Ok
m4 update:
311.0004 TiB
1026 hours
Avg speed 89.48 MiB/s.
AD gone from 179 to 175.
P/E 5447.
MD5 OK.
Still no reallocated sectors
Attachment 119035
@vapor
I'm wondering about something. How do you calculate the WA in the charts?
05: 0
B1: 12
E7: 92%
E9: 17792
EA/F1: 30464
72.15 hours
29.6435 TiB
58.54 MB/s avg
MD5 Ok
For the SF-1200s, I divide NAND writes by host writes reported by SMART attributes.
For the M225/C300/m4/Samsung470, I take ( reported raw wear ) / ( reported host writes (either SMART attribute or user reported) * 1024 / 64 ) (all drives are 64GiB NAND and 64GB available, so the 64 is applicable to all of them)
For the Intel X25-V with 5000 cycle NAND, I take ( 50 * (100-MWI) ) / ( user reported host writes * 1024 / 40 )
For the Intel 320 with 3000 cycle NAND, I take ( 30 * (100-MWI) ) / ( user reported host writes * 1024 / 40 ) (yes, it has 48GiB of NAND, but we don't know the parity scheme so only basing it on 40)
For instantaneous/recent WA, I just change the numerator and denominator to represent the change in the inputs from the most recent update(s).
All write amplification figures I list are just what WA appears to be from the data that's reported...if there's an error in the input, there's an error in the output.
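For reference, a code sketch of those formulas (the function names are mine, and everything assumes the SMART raw values mean what we think they do; the example numbers come from updates in this thread):

```python
# Rough sketch of the WA formulas described above.

def wa_sandforce(nand_writes, host_writes):
    """SF-1200: both values reported directly by SMART, same units."""
    return nand_writes / host_writes

def wa_from_raw_wear(raw_wear, host_writes_tib, nand_gib=64):
    """M225 / C300 / m4 / Samsung 470: raw wear vs. full fills of 64GiB NAND."""
    return raw_wear / (host_writes_tib * 1024 / nand_gib)

def wa_intel(rated_cycles, mwi, host_writes_tib, usable_gib=40):
    """X25-V (5000 cycles) or 320 (3000 cycles): cycles implied by the MWI drop."""
    return (rated_cycles / 100) * (100 - mwi) / (host_writes_tib * 1024 / usable_gib)

def recent_wa(raw_wear_delta, host_writes_delta_tib, nand_gib=64):
    """Instantaneous/recent WA: same formula applied to the change between updates."""
    return raw_wear_delta / (host_writes_delta_tib * 1024 / nand_gib)

# e.g. the C300 update at 204.177 TiB host writes and 3452 raw wear:
print(round(wa_from_raw_wear(3452, 204.177), 3))   # ~1.057x
```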
Thank you! That fills up some of my knowledge gaps. :)
314TiB. 58 reallocations. MD5 OK.
The available reserve space and the erase fail count both dropped down to 99 from 100. This happened either at 57 or 58 reallocated sectors.
Extrapolated out that's 5600-5800, depending how they count...weird numbers :p:
Of course, it is pretty inappropriate of me to extrapolate just on the smallest change possible :lol:
On a somewhat related topic, I wonder how big an Intel sector is in this context....an LBA sector, a page, a block, or some other unit?
@Anvil or Vapor-
could you guys update charts on post #1? i have been linking this quite a bit and it makes it easier for others to see results on first post :)
M225->Vertex Turbo 64GB Update:
87.51 TiB
402 hours
MWI 49 (drops by 1 for every 50 raw wear)
2571 Raw Wear
110.75 MB/s avg for the last 16.7 hours (on W7 x64)
MD5 OK
EDIT: C4 Erase Failure Block Count went from 0 to 1. (missed that, sorry)
Attachment 119048
450.872 TiB, 1223 hours, sa177: 1/1/36889, sa178: 43/43/574, 108.21 MB/s
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
At this rate (sa178), may only have a few days left...
It is slowing down a little, too. Maybe because of less reserved space.
C300 Update
219TiB host writes, 26MWI, 3702 raw wear, 2048/1 reallocations, 62.5MiB/sec, MD5 OK
SF-1200 nLTT Update
92.625TiB host writes, 60.719TiB NAND writes, 72 MWI, 971.5 raw wear (equiv), 56.35MiB/sec, MD5 OK
Thanks for the reminder, will get to it after this post :)
Also of interest, your overall WA has decreased when sa178 started increasing. It was a very steady 5.14x before sa178 went on the move and now it's down to 5.11x overall. Since sa178 started moving, recent WA has been mostly steady at 5.075x.
Yes, pretty small change, but considering the previous stability, the suddenness, and the timing of this change I find it very interesting.
Also, the reallocation line took a very noticeable turn upward...I agree that it may only have a few days left.
m4 update:
319.0516 TiB
1052 hours
Avg speed 89.49 MiB/s.
AD gone from 175 to 170.
P/E 5589.
MD5 OK.
Still no reallocated sectors
Attachment 119056
One_Hertz
FYI
http://communities.intel.com/thread/24205
Intel has the firmware update available if interested. It took me about 10 minutes total.
Mike
M225->Vertex Turbo 64GB Update:
93.58 TiB
416.5 hours
MWI 47 (drops by 1 for every 50 raw wear)
2687 Raw Wear
110.92 MB/s avg for the last 16 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count(Realloc Sectors) at 1 (BANK 6, Block 2406, Erased 2552 times)
Attachment 119067
05: 0
B1: 17
E7: 87%
E9: 24576
EA/F1: 38912
F2: 64
112.97 hours
37.8914 TiB
58.65 MB/s avg
MD5 Ok
05: 0
B1: 17
E7: 87%
E9: 25536
EA/F1: 40128
F2: 64
118.50 hours
39.0078 TiB
58.66 MB/s avg
MD5 Ok
m4 update:
326.4213 TiB
1076 hours
Avg speed 89.51 MiB/s.
AD gone from 170 to 166.
P/E 5718.
MD5 OK.
Still no reallocated sectors
Attachment 119095
C300 Update
225.165TiB host writes, 24 MWI, 3803 raw wear indicator, 2048/1 reallocations, 62.9MiB/sec, MD5 OK
SF-1200 nLTT Update
97.38TiB host writes, 64.406TiB NAND writes, 70 MWI, 1030.5 raw wear (equiv), raw wear delta 3, 56.35MiB/sec, MD5 OK
244.04TB Host writes
Reallocated sectors 6
MD5 OK
Been travelling this week, will try to catch up this weekend :)
m4 update:
330.1559 TiB
1088 hours
Avg speed 89.51 MiB/s.
AD gone from 166 to 164.
P/E 5784.
MD5 OK.
Still no reallocated sectors
Attachment 119108
Next update monday afternoon. I'm going away for the weekend.
M225->Vertex Turbo 64GB Update:
Just hit 100.00 TiB :up:
431.8 hours
MWI 44 (drops by 1 for every 50 raw wear)
2809 Raw Wear
110.85 MB/s avg for the last 16.9 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count(Realloc Sectors) at 1 (BANK 6, Block 2406)
Attachment 119114
Attachment 119113
I wish I was still on the FW that had the 10,000 P/E Cycles listed for this 51nm Samsung NAND.
Then MWI would be lasting ~350+ TiB on this drive :)
C300 Update
231.027TiB host writes, 22 MWI, 3901 raw wear indicator, 2048/1 reallocations, 63MiB/sec (it's going up?), MD5 OK
SF-1200 nLTT Update from earlier today
102.94TiB host writes, 68.72TiB NAND writes, 68 MWI, 1099.5 raw wear (equiv), 56.4MiB/sec (also going up?), MD5 OK
I'd be surprised if there were more than 18 hours left (from your most recent update/post).
And on top of that it seems like the drive is still poised to live after SA177 bottoms out, sheesh :p:
I wonder if the slight slowdown is something to do with the graph below. (The graph is based on SLC with 100K PE.)
Either way, I can't wait to find out what happens when the NAND can no longer erase or hold a charge. If the NAND can't erase, how can it hold a charge? What happens when data can no longer be erased or loses its charge? Will the OS become unstable? Will SMART pop up with a warning and then render the drive read only? :shrug:
Attachment 119135
C300 Update
233.917TiB host writes, 21 MWI, 3950 raw wear indicator, 2048/1 reallocations, 63MiB/sec, MD5 OK
SF-1200 nLTT Update
106.063TiB host writes, 71.156TiB NAND writes, 67 MWI, 1138.5 raw wear (equiv), wear range delta 3 (still), 56.45MiB/sec (still speeding up...), MD5 OK
holy cow maybe there will be some excitement! i want to see smoke LOL ;)
05: 0
B1: 23
E7: 82%
E9: 33024
EA/F1: 49472
F2: 64
163.77 hours
58.67 MB/s avg
MD5 Ok
Very interesting that the wear range delta for the SF-1200s with no static data is so much worse than with static data. I know with mine it got to 8 when I had no static data and dropped down to 3 (and has since stayed there) once I added ~35GiB of static data.
With yours, SynbiosVyse, wear range delta has been climbing steadily the entire time (up to 23 now) and you have very little static data.
Is that because throughout the drive more blocks are being written to? With 20-30% static data there are regions of the drive that are essentially dormant during these tests.
What effects does the wear range delta have (if any) on performance, life, etc?
One would expect wear range delta to be higher with static data but the opposite is true so far. My SF-1200 is ~63% full with static data and my wear range delta shrunk when I added it. You have nearly 0 static data and yours continues to grow. With static data, NAND seems to be evenly used; without static data, NAND doesn't seem to be evenly used.
It's almost like it expects static data as it tries to do wear leveling. :shrug:
Hard to know the effects of wear range delta...we don't know the units. Is it a percentage between 90th percentile usage and 10th percentile usage? Is it P/E cycles difference between 99th percentile usage and 1st percentile usage? If it's just P/E cycles between most used and least used, a difference of 23 cycles between least and most isn't a big deal when the average is 688 (as yours is now). If it's 23% wear difference between most/least, that's pretty notable and probably will have an effect on endurance. If it's 23% difference between 90th and 10th percentile, that could be really bad for endurance.
Also of note from your F40-A, LTT hasn't kicked in yet even though ~20% of your NAND's minimum rated lifetime has been used in less than a dozen days. Seems lifetime was set for something very short, maybe even impossibly short (basically deactivating LTT). And even if it kicks in, write speeds may not drop that much.
This seems counter-intuitive to me. If you have static data, wouldn't the NAND be unevenly used since some of the drive is dormant, hence, lower wear range delta? There will be a portion of the drive undergoing very few, if any, P/E cycles.
If you don't have static data, the drive will be evenly used because it is utilizing practically the whole drive.
477.199 TiB, 1298 hours, sa177: 1/1/39010, sa178: 4/4/968, 103.76 MB/s
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
sa178 almost there....geez sa235 still hasn't moved, what do we have to do to get that one to change??
If I remember correctly the delta between most-worn and least-worn was higher on the V2 when I used static data and it remained lower on the V3 with no static data. SF drives rotate static data, so presumably that would increase WA.
The really weird thing though is that the drive has not hit LTT.
Could you please run Anvil's app on 100% incompressible data just to make sure LTT has not kicked in and is being masked by compression. I'm sure write speeds would have dropped if LTT had kicked in, although at 46% it would not be by as much as I saw.
This is a retail drive?
Off topic a bit, but I've noticed that one of my Intel drives has gone from 98% back to 100% over the 12 months or so since I switched it out from being an OS drive to a drive that just contains static data. Quite strange, but it has definitely reverted back to 100%.
I don't have a good explanation, I'm just translating what the data says and it says with static data, NAND seems to be evenly used; without static data, NAND doesn't seem to be evenly used.
I don't think there's any modern SSD that doesn't actively rotate static data, however; it's extremely important for wear leveling. Even the 'old' Indilinx M225->Vertex Turbo has its reported wear range (based on max, average, and min reported P/E cycles) stay fairly constant over time....the Max Wear started at 255 cycles above Avg (257 vs. 2) and now Avg is at 2809 and Max is just 332 cycles higher than that.
I think that's exactly what's going on.
With my compression tests I observed a WA of .732x with the 46% setting with no static data and 12GiB free. With the 46% setting, 34.85GiB of static data, and 12GiB free WA has been .782x. With the F40-A with no static data and 1GiB free, it's been .803x. So the F40-A has the highest WA and a growing wear range delta--maybe it's the 1GiB free setting? :shrug:
Agreed that it's very odd LTT hasn't activated yet. But the F40-A has been writing to NAND at 47.1MiB/sec, so I really doubt LTT will be hiding behind compression considering the NAND write speed is high and unchanged :eh:
I wonder if it has ANYTHING left to give :D
I found this dialog up when I checked this afternoon:
Attachment 119148
Here is how the ASU stat screen looked at that time:
Attachment 119149
As I found the system above, the Samsung drive was completely unresponsive. I could not even read the SMART parameters or view any files with Windows Explorer. I tried rebooting, and the BIOS did not even see the SSD. Then I powered down for a minute, and when I powered back up the BIOS could see the Samsung, and Windows could also see it. I was able to read the SMART parameters:
Attachment 119150
I was able to verify the MD5 checksum for the ~40GB static data file, and it was fine. Also, ASU never detected any MD5 errors. So it seems the data has been, and continues to be intact on the SSD.
I tried running AS-SSD and it actually managed to write a little to the SSD, but then it locked up. I killed it with Task Manager, but the SSD was unresponsive to anything again, not even reading SMART parms.
Attachment 119152
I tried to shut down but Windows locked up at the "shutting down" message and I had to cut the power. When I powered back up, the Samsung was again responding to reads and SMART parm queries, and it could even write a little bit, but very slowly, and I think most writes were failing:
Attachment 119151
Note that the first ASU screenshot shows 478.037 TiB, but the second one shows 477.212 TiB. I think the first one is correct. The second one is lower because I don't think ASU was able to save the last bit of writes when it quit the first time, so the second time when it loaded the writes it was missing some of them.
So that 478.037 TiB comes to 525.607 TB (I prefer TB to TiB). That is equal to 8212 times writing the 64GB capacity of the drive. Which is equivalent to writing the entire 64GB capacity of the drive, every day, for 22 years. Or to writing 287GB every day for 5 years.
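For reference, the arithmetic behind those equivalences (a quick sketch using the 478.037 TiB total and the 64GB capacity):

```python
# Back-of-the-envelope check of the totals quoted above.
total_tib = 478.037
total_tb = total_tib * 1024**4 / 1000**4     # TiB -> TB, ~525.6 TB
capacity_tb = 64 / 1000                      # 64 GB drive capacity in TB

drive_fills = total_tb / capacity_tb         # ~8212 full-capacity writes
print(f"{total_tb:.1f} TB written, ~{drive_fills:.0f} drive fills")
print(f"~{drive_fills / 365:.1f} years at one full drive write per day")   # ~22.5 years
print(f"~{total_tb * 1000 / (5 * 365):.0f} GB/day to do it in 5 years")    # ~287-288 GB/day
```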
Anyone have any ideas on anything else I should try with the Samsung 470? Keep in mind that Windows seems to be unstable when the Samsung is connected now when any program tries to write to the Samsung -- freezes and lockups. But it seems fine as long as there are only read requests sent to the Samsung.
My plan is to keep the drive unpowered now, and plug it in periodically (every few weeks or once a month) and try to verify the MD5 checksum on the 40GB static data file.
I wonder if the freezer trick works at all for SSDs :p:
Shocked the death was so sudden too...and before SA178 was depleted. It's all extremely interesting :D
(of note, I also had the ASU log for my C300 'cropped' when I cut power)
EDIT: guess I'll do a special edition of the charts tonight (wasn't planning on it) in honor of the first successful SSD death :)
EDIT2: also of note, the Intel 320 40GB with 'worse' 25nm NAND has done more normalized writes than the Samsung 470 did in its lifetime. Normalized writes scale ~1.09:1 with P/E cycles, assuming 1.00x WA. This is some minor evidence supporting the apparent WA of ~5.11x of the 470. In addition, Intel 320 reallocations are over an order of magnitude lower (and started much, much earlier). Of course, the Intel 320 also has parity, the NAND isn't the same, the sample size is very small, etc., so this is hardly hard evidence of the >5x WA, but it is some soft evidence. :)
amazing, good show! your efforts and coin are appreciated, she was a good SSD, one that we will all remember fondly!
and that would be the one with the worst WA (is that still a valid comment? what would the WA work out to be in the end?)
if it is 5 then others could be in for a lonnnngggg haul :)
EDIT: seen ^^ post above...dont know how i missed that. wow!
Nice to see some good action finally.
Yeah baby! Some action finally! Samsung has proved to be a worthy SSD! I am extremely pleased with the fact that you can read the file and verify checksum. So, it works the way SSDs were advertised to work. Of course, 500+TB of data written to 64GB, that's a lot for me! And written at a pretty good speed of 100+MB/s. Hats off!
@johnw...thanks for your hard work :clap:
Can't wait to see who is next :)
Charts Updated :)
All bar charts are sorted by their respective equivalent of Writes So Far.
Host Writes So Far
Attachment 119154
Attachment 119155
Normalized Writes So Far
The SSDs are not all the same size; these charts normalize for available NAND capacity.
Attachment 119156
Attachment 119157
Write Days So Far
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Attachment 119158
Attachment 119159
Host Writes vs. NAND Writes and Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 119160
Attachment 119161
Attachment 119162
The scary thing for the rest of us is if all our drives have the same lifetime : MWI endurance ratio.
478TiB : 59.5TiB is roughly an 8:1 ratio :eek:
No other drive is even at a 2:1 ratio...and neither of the oldest (Intel) drives are past 5:3 ratios yet.
05: 0
B1: 24
E7: 81%
E9: 34560
EA/F1: 51456
F2: 64
173.16 hours
58.66 MB/s avg
MD5 Ok
Stopping and switching to 0-Fill.
I feel in some sense that http://www.youtube.com/watch?v=19yjo...eature=related is suitable for this situation
(taps)
Hi SynbiosVyse, this is a retail drive? If it is, then it would appear that not all SF drives are set for lifetime throttling after all, which is contrary to claims made by OCZ.
If you switch to 0-fill you will not see a drop in write speeds if LTT kicks in. With 100% incompressible data you will see the full impact of LTT, if it is activated.