Yes, that's correct. I didn't realize the raw values in hex had changed; strange that CrystalDiskInfo doesn't translate the hex to decimal. Why does it say 0 there?
@ John
Those specs are for 25nm Micron NAND, with a 1MiB block. I'd also guess multiple channels. Seems like the controller/SATA interface adds quite a bit of overhead. (Edit: easy to see why a clean drive can give really high benchmark figures)
@ SynbiosVyse
Function > Advanced Features > Raw Values > select how you want the values displayed.
M225->Vertex Turbo 64GB Update:
60.84 TiB
MWI 59 (drops by 1 for every 50 raw wear; quick check below)
2064 Raw Wear
116.81 MB/s avg for the last 29.13 hours (on W7 x64)
MD5 OK
Man, this drive flies on Win 7 :shocked:
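The MWI relation checks out for this update, assuming the Indilinx counter simply floors raw wear / 50 (my assumption, not a confirmed spec):
Code:
# check: MWI = 100 - (raw wear / 50), using the numbers posted above
raw_wear = 2064
mwi = 100 - raw_wear // 50
print(mwi)   # 59 -- matches the reported MWI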
bluestang, I'm tracking SMART attribute C7 (host writes in sectors), CF (average erase count), D0 (max erase count), D1 (MWI), and C3 (reallocation equiv, I think). If you could include those raw values (preferably in decimal) with each update, that'd be great. (also keeping tabs on speed and MD5 check, but that's not a SMART attribute, of course)
SynbiosVyse, I need the raw values (preferably in decimal) of SMART attributes E9 (NAND writes), F1 (host writes), B1 (wear range delta), and 05 (reallocation equiv, I think) from you, in addition to average speed, MD5 check, and E7 normalized value (MWI).
Doesn't matter if it's a screenshot or written out, whatever is easiest for both of you :)
Also of note, your recent WA is down to just 1.22x (though I'm not certain about the numbers without the SMART attribute raw values). Before enabling TRIM, your WA was around 2.43x, so it seems TRIM has cut WA in half. :eek:
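For anyone wondering where those WA figures come from, this is roughly the calculation; the interval numbers below are made up for illustration, not bluestang's actual raws:
Code:
# WA over an interval = NAND writes / host writes, where NAND writes
# are estimated as (change in CF average erase count) * NAND capacity
avg_erase_delta = 50        # change in CF raw over the interval (made up)
nand_gib = 64               # total NAND capacity
host_gib = 2623             # host writes over the same interval (made up)
print(f"WA ~ {avg_erase_delta * nand_gib / host_gib:.2f}x")   # ~1.22x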
Those timings are at the NAND level. If a block is not clean, i.e. it contains a small amount of valid data, that data has to be relocated before the block is erased, which adds time to the operation. Deciding where that data ends up is a processing task, which also takes time. What I tried to say was that when a drive is in a fresh state it doesn't have to worry about any of that.
A secure erase can be executed in under ~2 seconds, so there must be a way to access and erase all blocks more or less simultaneously. A block erase can therefore be very fast if the controller does not have to check whether data is valid, map data or manage WA.
When the drive is in a used state, processing time is required, which adds overhead. As can be seen below, that overhead can vary significantly depending on the state of the drive.
Attachment 118951
In general it is true that GC may need to relocate some flash pages before a block can be erased. But in the specific case of the ASU endurance test on the Samsung 470, I think there is little relocation needed, for two reasons: the amount of random writes is relatively small compared to sequential writes, and it is set to keep 12GiB free. With few random writes, 12GiB free, and TRIM, the GC just does not have much work to do.
As for SE in 2 sec, I think that is highly dependent on the SSD. I've had a SE take more than 30 sec (I think it was an Intel G1), and I've seen references to it taking a minute or two. Some SSDs have built-in encryption, so all the drive has to do is generate a new encryption key and mark all pages invalid in the index, which could be done very quickly. But for SSDs without encryption, it is not "secure" to just mark all the pages as invalid -- it needs to go through and erase all the blocks. So I can see that taking more than a few seconds (but still less time than it would take to write zeros to the entire SSD).
In principle I agree with you that an erase operation at the SSD level is likely to be significantly faster than host write speed capability.
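A rough feasibility estimate supports both timings; all the figures here are assumptions (64GiB of NAND, 1MiB blocks, ~2 ms per block erase), not measurements from any drive:
Code:
# SE duration = (blocks / parallelism) * per-block erase time
blocks = 64 * 1024              # 64 GiB of NAND in 1 MiB blocks (assumed)
t_erase_ms = 2.0                # per-block erase time (assumed)
for parallel in (16, 64, 1024):
    print(f"{parallel:5d} blocks in parallel: {blocks / parallel * t_erase_ms / 1000:.2f} s")
# 16-way ~8.2 s, 64-way ~2.0 s, 1024-way ~0.13 s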
Whilst I don't think the erase time is an issue for the WA calculation, I'm still not sure how WA has been calculated. Somehow it does not seem possible that it can be so high, although the host write speeds imply that WA is being sacrificed to some degree.
It would be good if some of the key assumptions being made in the thread could be summarised in the first post.
It's interesting to see the difference TRIM has made to bluestang. Significantly faster write speeds and reduced WA.
Regarding SE, did anyone monitor writes during an SE?
I've checked the SF drives and there are 0 writes; the 2-second SE on the SF-based drives is possible because the SE is done by sending a specific voltage to the NAND.
--
236.73TB Host writes
Reallocated sectors : 6
MD5 OK
33.34MiB/s on avg (~60 hours)
I would not expect to see any increase in writes during a SE. I think that a cycle for a NAND cell is an erase followed by a program operation, so during a SE you only have half of the cycle.
@johnw: could you run the endurance test for 20-30TB with TRIM disabled? I am curious to see what the WA would be when the drive has no clue about what has been erased.
Also, about general write performance, it was specified earlier by Ao1 that programming a page normally takes 900μs. 8 dies * 4KiB per page / 0.9ms = ~34.7MiB/s, which is much smaller than what a normal SSD can do. Does anybody know how many pages can be programmed in parallel for one die?
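The arithmetic spelled out, taking the 8 dies, 4KiB pages, and 0.9 ms program time as given from the thread:
Code:
dies, page_kib, t_prog_ms = 8, 4, 0.9
mib_s = dies * page_kib / (t_prog_ms / 1000) / 1024
print(f"~{mib_s:.1f} MiB/s")    # ~34.7 MiB/s with no interleaving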
18.40 hours
18.8203 TiB written
58.20 MB/s
MD5 ok
E9 8832
EA/F1 19328
You were right :p Health finally dropped to 97% today.
05: 0
B1: 6
E7: 97%
E9: 8960
EA/F1: 19456
SynbiosVyse, something does not seem right. Are you running at 46% fill? It seems like you have racked up a lot of writes since your last update. Even if the last screenshot was taken just before the MWI turned to 99%, it still seems a big drop to now be at 97%.
Sorry, don't mean to doubt you, but the data is quite strange. Certainly very different to the OCZ drives.
You were right with your assessment previously. I was originally running 0-fill and changed to 46%.
Quote:
E9 (if correct) is only showing 5,312 GiB of writes to NAND. F1, host writes, is showing 14,976 GiB. Were you running 0 fill? Now 46% fill?
My drive only has a few MiB of data, as I had mentioned before, and I set min GiB free to 1 GiB, quite a different setup from what you guys were running before. My goal was to kill this drive as fast as possible.
I did see 100% health yesterday and when I first looked at it today it was at 97%. Unfortunately if it was ever set to a value in between, I missed it.
I don't think it's that far off the V2 40GB. The F40-A had ~3.56TiB of NAND writes between 100 and 97, whereas the V2 40GB had ~6TiB (5.5TiB host writes * 1.1x) of NAND writes between 100 and 97. Assuming the V2 40GB has 34nm 5000-cycle NAND and the F40-A has 25nm 3000-cycle NAND, it works out pretty well.
At this rate, it seems the F40-A is only 2 days away from LTT activating (if it's set to a 1 year lifetime). :eh:
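Here's the back-of-envelope check, assuming 40GiB of NAND in both drives and that one MWI point equals 1% of rated P/E cycles (both assumptions on my part, not confirmed specs):
Code:
# 3 MWI points = 3% of rated cycles; NAND writes = 3% * cycles * capacity
for name, cycles in (("V2 40GB (34nm)", 5000), ("F40-A (25nm)", 3000)):
    tib = 0.03 * cycles * 40 / 1024
    print(f"{name}: ~{tib:.2f} TiB per 3 MWI points")
# V2 40GB: ~5.86 TiB (observed ~6), F40-A: ~3.52 TiB (observed ~3.56)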
05: 0
B1: 6
E7: 97%
E9: 9472
EA/F1: 20096
22.26 hours
19.5930 TiB
58.23 MB/s avg
MD5 Ok
What are some indicators of LTT? Slow write speed?
Has anyone doing these tests ever achieved the point where they could not write to the drive at all?
I have another one of these drives (virgin) ready to go again if you guys want to see another test with more controlled parameters. :)
Write amplification is calculated as (sa177 raw * flash capacity) / host writes to the SSD. This of course assumes that sa177 raw is counting the average number of erase cycles of the flash in the SSD.
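In code form, with the flash capacity and raw values as example inputs only (not the Samsung 470's actual figures):
Code:
# WA = (average erase count * flash capacity) / host writes
avg_erase = 2000            # sa177 raw, example value
flash_gib = 64              # total NAND capacity, example value
host_gib = 100000           # total host writes in GiB, example value
print(f"WA = {avg_erase * flash_gib / host_gib:.2f}x")   # 1.28x here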
I'm not surprised that TRIM helped write speed and reduced WA. That is exactly what TRIM is supposed to do. By increasing the number of invalid blocks for GC to work with, performance is increased and write amplification is reduced since collecting invalid pages is more efficient when there is more "scratch" space. That is almost exactly the same thing as increasing over-provisioning in order to increase lifetime and help performance.
johnw, your reallocations are accelerating :yepp:
Attachment 118970
(the disparity between the bottom of the pack and your reallocations made me switch to a logarithmic scale for that axis... and the line still appears nearly linear with a positive slope, meaning acceleration)
How do you know the reallocations? Is that C4?
I don't think a program automatically directly follows an erase. My understanding is that "program" is a page-operation (basically a write to a page), and it can only be done on a page in a block that has been erased (cannot re-write or re-program a page). So it would make no sense to program the pages in a block after erasing the block, unless the SSD had actual data to write to the pages.
I would have tried it with TRIM disabled if we had thought of it a couple hundred TiB ago, but now I think the Samsung 470 is in deterioration (with sa178 moving quickly) and I do not want to disturb the conditions of the experiment now. Maybe someone else with a Samsung 470 can try that experiment.
As for flash write speed: the writes can be interleaved, possibly up to 5 per channel, but I think that requires more die than channels. For example, if there are 8 channels and 32 die, the writes can be interleaved 4 times, effectively increasing the write speed 4 times over the number you computed there. I think this may only be possible with synchronous flash, but I am not certain that it cannot be done with async flash (although async flash is slower at writes than sync flash, so there must be a reason for that).
Even with interleaving at 5 times, it does not explain 250-300 MB/s write speeds that can be achieved on 240-256GB SSDs using 8GiB flash die. There must be additional tricks beyond interleaving to increase the write speed.
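Extending the earlier per-die arithmetic with assumed interleave factors shows the gap (same assumed 4KiB pages and 0.9 ms program time from the thread):
Code:
page_kib, t_prog_ms = 4, 0.9
kib_s_per_die = page_kib / (t_prog_ms / 1000)    # ~4444 KiB/s per die
for dies in (8, 32, 40):                         # 8 ch x 1/4/5-way interleave
    print(f"{dies} die: ~{dies * kib_s_per_die / 1024:.0f} MiB/s")
# 8 die ~35, 32 die ~139, 40 die ~174 MiB/s -- still short of 250-300 MB/s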
Might as well...was holding off until reallocations of a drive (any drive) climbed, but no sense having that awkward dead space on the bottom when it can be future-useful and not awkward up top :)
Attachment 118971
05 and C4 raw values for your F40-A, I believe
I may be wrong, but I think that's the only indicator on an SF-1200 drive.