Yes, some more SF, the SF2 series in particular, and some sort of HDD would be interesting as well :)
The SF2xxx drives are also lifetime write throttled. The controller will not allow P/E cycles to exceed the warranted lifetime.
The only way to run an accelerated wear-out test on an SF drive is to find one that has not had the throttling option set by a vendor.
^^ Come on Zads...I know you can help out here! :D
Really need a way to remove throttling on SF first. If only I knew the parameters to specify for their FORMAT command.
183TB. 5%. 18 reallocated sectors.
SMART attribute 181 (B5)
Below is some clarification on non-aligned reads/writes:
"The NAND page size is handled in firmware, before the SMART calculation is performed. Firmware has internal counters for total unaligned writes and total unaligned reads (regardless of page size). These counters are added together, divided by 60,000 to get the raw value.
It should be noted that this number is not related to sector size, but rather to NAND page size (though I suppose that depends on how you define “sector”). The counters are incremented in one of two cases: 1) the write starts somewhere besides a page boundary, or 2) the write from the ATA command is not the length of the physical page (be it 4k or 8k). The math is the same, regardless of page size. This is because for the 8k NAND on the 256GB and 512GB, we simply take 2 4k chunks from the host at a time, and write them to the 8k page. If they are misaligned for 4k, they’ll be misaligned to the same degree for 8k."
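If I'm reading that right, the raw value calculation boils down to something like this (a minimal sketch in Python; the counter names are mine, not from the firmware):

Code:
def attr_181_raw(unaligned_writes, unaligned_reads):
    # Firmware keeps internal counters of total unaligned writes and
    # reads: accesses that don't start on a NAND page boundary, or
    # aren't a whole number of physical pages (4k or 8k).
    return (unaligned_writes + unaligned_reads) // 60000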
On the Intel 320 you can track what appears to be a reverse MWI, if someone knows how to access the logs listed in the snip.
I have no idea if this can be done on the X25.
Just something I ran across.
Attachment 117369
151 hours, 43.9010 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 77 to 75.
Avg speed for all 151 hours is roughly 84.7 MiB/s.
There's no indication of the speed decreasing. The M4 looks very good so far, and with 3/4 of the journey left there is no sign of reallocated sectors.
Attachment 117371
Attachment 117372
This thread kind of confirms what I have been saying all along:
-avoid Samsung and Sandforce if you can get Intel or Crucial at the same price
-avoid 25nm if you can get 34nm at same price
Really eager to see a dead SSD folks !
None of those SSDs are limiting normal users in any way :)
I'm not saying that just because I've got a few of both of those two you mentioned :D
This thread does not confirm anything of the kind. Where do you come up with this crazy stuff?
Well, that is my personal opinion on this kind of stuff, so I thought I would share it here.
So far, the Samsung is seemingly less endurant than the Intel and the Sandforce is getting warranty throttled.
Also, with the same controller and same firmware, 25nm NAND will never be better than 34nm NAND (it's just physics).
I can understand the first statement - that SF and Samsung are not on the same level as Intel or Crucial, either because the endurance is pretty low (Samsung) or for other reasons (SandForce). However, the endurance of the Samsung drive is still very high and should not be a problem for most users. Although I will never support SandForce while they throttle their devices and advertise unachievable performance, it's still not a problem for most users, as you really have to stress your drive for a long period of time.
And the second statement may be true in theory, but how on earth do you conclude such a thing from this thread? The tests (so far) clearly show that the 25nm Intel 320 has roughly the same endurance as the 34nm G2. Yes, it's not the same drive, but I still don't quite understand how you come to this conclusion...
If it is your personal opinion, then why not state it as your personal opinion rather than claiming the data in this thread confirms it?
Since the Samsung has not failed yet, there is no data in this thread to confirm that the Samsung is "less endurant".
And for the same controller and firmware, 25nm flash COULD be better than 34nm flash. It is not just physics. There is engineering and manufacturing involved, too. The 25nm chips could simply be higher quality (for various reasons) than the 34nm chips. We don't know. Once the SSDs start failing, then we will have some data.
Updated charts :)
C300 isn't in the MWI Exhaustion graphs because it's just too soon...MWI is down to just 97, very hard to extrapolate with accuracy off of that.
Host Writes So Far
Attachment 117383
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117384
MWI Exhaustion:
Attachment 117385
Normalized data graphs
The SSDs are not all the same size; these charts normalize for 25GiB of onboard NAND.
Writes vs. Wear:
Attachment 117386
MWI Exhaustion:
Attachment 117387
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117388
MWI Exhaustion:
Attachment 117389
vapor:
One graph I'd like to see would be average block erase count vs. TiB written. For the Samsung, I think we agree that attribute 177 is probably the average block erase count. For Micron, 0xAD is the "average erase count of all good blocks". On the Intel 320, I think 0xE9 attribute raw value may be the average block erase count ("the number of cycles the NAND media has undergone").
I can do that :)
Will be tough to backtrack on the Intel 320, unless One_Hertz has that info handy. I have been logging it for your Samsung and the two Crucial SSDs, which is good.
One caution with the graph, however... it will just be an upside-down MWI vs. TiB graph. The C300 and your Samsung move the MWI in increments of 50 raw (though the Samsung doesn't really stick to it perfectly) and the m4 moves in increments of 30 raw. Still, it will be useful for post-MWI Exhaustion wear tracking :)
Samsung seems capable of nearly 10x the cycle usage speed of C300...which had me thinking. Considering one of the values will always be dwarfed on the chart (at least until the Samsung dies), logarithmic or linear axis scale?
Attachment 117390
Attachment 117391
EDIT: theoretically, these values are available on SandForce via SMART attribute 233, just divide by NAND size. At least one new SandForce will be entering in the coming weeks :)
EDIT2: a lack of early-life Samsung data is why it's 'better' than the Crucials early in life on the logarithmic chart...Excel isent to brite :p: If logarithmic scale axis is the choice--should I add interpolated data?
I hate to :horse: but I have to jump in here as well. There is NO correlation between this statement and any results contained in this thread thus far. Simply not true, or if it is, that result hasn't been borne out yet.
Quote:
-avoid 25nm if you can get 34nm at same price
CMON guys, first to 1/2 PB is the winner :)
Hi John, when you were waiting for the MD5 update, did you leave the drive on idle? Maybe that period allowed static data rotation to revitalise the NAND reserve? Seems a bit strange that it is still going strong 9TB after getting to MWI 1. Even if the NAND specs are based on minimum P/E cycles, that still seems quite a lot of data :shrug:
I don't think the OCZ data should be extrapolated the way it is presented in the graphs. The testing stopped because it was completely throttled. In the eyes of the users on this forum, that would be considered a failure.
I think you should remove the dotted trendline from OCZ data.
And there was the first week of testing over. To sum it up so far:
- The M4 has written 49.0105 TiB.
- (AD)Wear leveling count has gone from 100 to 72.
- Avg speed 84.97 MiB/s
- 168 hours finished
Otherwise everything looks normal; the speed is good and there have not been any reallocated sectors yet.
Attachment 117406
Attachment 117405
186.5TiB. 3%. Reallocated sectors up to 19.
It managed to write more than most people are likely to write during 3 years of rather heavy usage (8TiB per year is way above normal usage).
The dotted line shows interesting data as it shows what could have been, if it wasn't for the throttling.
So, yes, it failed to complete the Endurance test, it did not fail as a drive though.
I'd say it might go on for a few more weeks :) depending on how wear leveling is implemented.
3000 P/E cycles is about 190TB (best case).
--
141.55TB Host writes
MWI 22 (early)
Reallocated sectors, still at 6.
During the last 24.5 hours it managed an average of 32.13MiB/s writes + 9 rounds of MD5 testing on a ~1.7GB file.
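By the way, the "about 190TB" figure is just capacity × rated cycles, scaled by write amplification. A quick sketch (the 64GB of NAND and the WA = 1.0 best case are my assumptions, picked to match the number above):

Code:
def best_case_host_writes_tb(nand_capacity_gb, pe_cycles, write_amp=1.0):
    # Best case: every rated erase cycle is spent on host data.
    # Real-world WA > 1 eats into this proportionally.
    return nand_capacity_gb * pe_cycles / write_amp / 1000

print(best_case_host_writes_tb(64, 3000))  # 192.0 TB, i.e. "about 190TB"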
So far, which SSD would you guys say is the "best", and what would you recommend people buy?
Wear vs. TiB might still stick to that line, even with LTT (and my guess is that it would stick pretty closely to that line). The tester chose to stop testing but the drive was still functioning as intended, even if it were not functioning as desired.
Wear vs. Write-days extrapolation would change dramatically (for the 'better') as the drive would now be forced to last 1000+ days rather than the 55-60 it could wear itself down in without LTT. If anything, I feel I should add another line indicating actual life expectancy in days...but that would mess up the visibility of the charted data from every other drive :p:
That said, I expect to remove the data from the Vertex 2 40GB when a SF-1200 drive without LTT enters testing (partially because testing on the V2-40GB is permanently incomplete and partially because of the initial 0-fill numbers complicating things), but until then it'll be on the chart because there is no alternative/better SF-1200 data.
C300 Update:
15.0234TiB, 95MWI, 251raw, 0 reallocated, 62.12MiB/s, 42/0 with MD5 checks (1.65GiB file)
Attachment 117411
Updated charts :)
First time the C300 is in the MWIE charts...MWI reached 95 with a raw of 251, so it was early in the cycle and is, hopefully, good for extrapolating.
Host Writes So Far
Attachment 117412
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117413
MWI Exhaustion:
Attachment 117414
Writes vs. NAND Cycles:
Attachment 117415
Normalized data graphs
The SSDs are not all the same size; these charts normalize for 25GiB of onboard NAND.
Writes vs. Wear:
Attachment 117416
MWI Exhaustion:
Attachment 117417
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117418
MWI Exhaustion:
Attachment 117419
Finally some figures for the C300 :)
So, 300TiB, should be great fun!
On another note, according to the "formula", a 128GB C300 with 34nm NAND should last (5000 × 128) = 640,000 GB, or 640 TB of writes! The 64GB version of the C300 should last about half of that, so about 320 TB, etc.?
Also, you guys might wanna include secure erase endurance testing here as well. How many times can an SSD be secure erased? Might be a good point of investigation.
300TiB is indeed a lot :p:
But it still won't last as long, in days, as your X25-V.
That assumes a write amplification of 1.00x, which is a tough assumption without some sort of compression (or totally erratic performance). It does appear that the Crucials do have a nice, low WA though :)
The Crucial will likely have 34nm NAND that is spec'ed at 10K P/E cycles.
If anyone testing can confirm the NAND product being used in their SSD, I can try to find the manufacturer's specs. The V2 had Intel 34nm NAND.
I had it on for a few hours without running Anvil's app, but then I powered the computer (and SSD) down for the night, and did not power-on again until Anvil's new app was ready the next day.
I'm not surprised that it can continue going after more than 6000 erase cycles. Between possibly conservative specs for the erase cycles of the flash, and the spare-flash the controller reserves, I would not be surprised if it tops 10,000 erase cycles, or even more. But who knows? Wait for the data! We shall see...
Evening update:
177.5 hours, 51.8789 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 72 to 71.
Avg speed for all 177.5 hours is roughly 85.12 MiB/s.
Attachment 117425
Really curious how much better the C300 will end up being compared to the M4
84.218 TiB, 240 hours, sa177: 1/1/6996
The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.
Attachment 117440
From the reading so far, it seems to me that the average SSD should be able to write at least 2500x its capacity (ignoring the small sample size here, where drives are hitting ~4000+x their capacities). It makes me wonder how much a conventional consumer-level HDD could handle under the same kind of workload.
I'd love to see what kind of per-capacity endurance consumer HDDs have (by conservative estimates, a 1TB HDD should be able to write AT LEAST 2.5PB to match what SSDs can handle), and it seems to me that the app being used for testing the SSDs would be the program of choice for that... if anyone else is interested to see (as I'm not sure normal HDDs have any SMART attribute that monitors writes).
Here is a picture from under the hood of my M4.
Attachment 117446
There are 8 of these on the PCB.
Morning update:
188 hours, 55.0254 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 71 to 69.
Avg speed for all 188 hours is roughly 85.25 MiB/s.
Attachment 117447
I would also like to see such a test. But unfortunately, we would not be able to do a fair comparison. A 3-platter 1TB drive could write 2.5PiB in around 282 days (assuming 110MiB/s sequential speed). Now if you consider only random writes, at a blazingly fast 1MiB/s, you get 85 years. Considering the same amount of data written, there would be three things to compare:
- sequential write (probably HDD>SSD)
- random write (SSD>HDD due to time needed to write same quantity)
- error rate and data retention
Average load scenarios are a combination of sequential and random writes. What will probably make the difference is the error rate and data retention.
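For reference, both time figures fall straight out of the assumed speeds; a quick check:

Code:
PIB_2_5 = 2.5 * 1024**5                            # 2.5 PiB in bytes
MIB = 1024**2

seq_days = PIB_2_5 / (110 * MIB) / 86400           # sequential at 110 MiB/s
rand_years = PIB_2_5 / (1 * MIB) / (86400 * 365)   # random at 1 MiB/s
print(round(seq_days), round(rand_years))          # 282 days, 85 years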
144.04TB Host writes
MWI 21
Reallocated sector count, still at 6.
Avg speed 33.26MiB/s (over a 15.7 hour period), MD5 6/0.
--
One can do a test on an HDD; it won't be a comparison though.
It would still tell how long (that particular) HDD would handle the stress of this test; it might go on for years or it might fail in a few weeks. As the HDD has moving parts, there are other factors that could trigger a failure.
I'd say it's an interesting test to do.
I'm running a short test as we speak on an F3; looks like 8-10TiB/day is doable on an empty drive. Mine is half full and avg speed is 98-99MiB/s (one loop is set to ~12GiB of writes).
I believe your NAND is rated for 3,000 P/E cycles. The OCZ Vertex 2 used Intel NAND 29F32G08AAMDB.
Looking at info I could find on the web....
MICRON 25nm, 3,000 P/E cycles. 4K page size. Block size 1,024K
• MT29F32G08CBACA
• MT29F64G08CEACA
• MT29F64G08CFACA
• MT29F128G08CXACA
• MT29F64G08CECCB
• MT29F64G08CFACB
MICRON 34nm, 5,000 P/E cycles. 4K page size. Block size 1,024K
• MT29F32G08CBABA
• MT29F64G08C[E/F]ABA
• MT29F128G08C[J/K/M]ABA
• MT29F256G08CUABA
• MT29F32G08CBABB
• MT29F32G08CBCBB
• MT29F64G08CFABB
• MT29F64G08CECBB
• MT29F128G08CJABB
• MT29F128G08C[K/M]CBB
• MT29F256G08CUCBB
MICRON 34nm, 10,000 P/E cycles. 4K page size. Block size 512K
• MT29F16G08MAA
• MT29F32G08QAA
• MT29F64G08TAA
Intel 34nm, 5,000 P/E cycles. 4K page size. Block size 1,024K
• (JS)29F32G08AAMDB
• (JS)29F64G08CAMDB
• (JS)29F16B08JAMDB
Attachment 117454
The V2 (presuming Vertex 2?) also had 25nm versions, which resulted in customer complaints, as there was no labeling to differentiate the two V2 variants.
Looking at those P/E cycle ratings, 200TB for a 64GB drive is extremely likely, even if 3,000 P/E cycle NAND were used. The 3,000 figure is a minimum or average rating; the NAND should be able to handle more. Consider how much it could do fully sequentially.
As for the HDD test, Google published some stats about its SATA HDD usage, error rates and failure rates.
190.5TB. 1%. Still 19 sectors.
I've reached the magical point except the SSD clearly doesn't care.
Great work, amazing! Now to see how much longer it goes :clap:
Quote:
I've reached the magical point except the SSD clearly doesn't care
Just want to thank you guys again for this important testing; your money and, more importantly, your time spent on this is greatly appreciated!
Thanks Ao1. Then there is no doubt that my M4 is rated for 3000 P/E cycles :up:
Evening update:
203 hours, 59.5699TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 69 to 66.
Avg speed for all 203 hours is roughly is 85.47 MiB/s
Attachment 117467
C300 Update, charts next post :)
21.43TiB, 93 MWI, 358 raw, 0 reallocated, 62.15MiB/s
Attachment 117466
Updated charts :)
One_Hertz, is there a raw wear indicator for the 320? Hate to think that the only graph it'll be left participating on is just the Host Writes So Far bar graph :eh:
Host Writes So Far
Attachment 117468
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117469
MWI Exhaustion:
Attachment 117470
Writes vs. NAND Cycles:
Attachment 117471
Normalized data graphs
The SSDs are not all the same size; these charts normalize for 25GiB of onboard NAND.
Writes vs. Wear:
Attachment 117472
MWI Exhaustion:
Attachment 117473
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117474
MWI Exhaustion:
Attachment 117475
Well, there is the reallocated blocks vs. TiB written. Unfortunately, the Samsung cannot participate in that (unless one of the two unknown SMART attributes turns out to track that number, but if it did, I would have expected some change in those attributes before now).
SSDs seem really durable these days; people on other sites are right that you can use an SSD like a normal hard drive. They said that the drive will break before the rewrite limit has been reached.
So does this mean doing all the stuff to limit SSD writes is a waste of time?
Also good work from all the people working on this experiment.:up:
The 320 can report another value for "wearout"
Look at post #799; could be that I've already been there, as I've done a few extra tests on the 320 Series. Will check when I get back home later tonight.
Morning update:
214 hours, 62.8925 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 66 to 64.
Avg speed for all 214 hours is roughly 85.6 MiB/s.
Attachment 117496
Too bad the Samsung and the Crucial have crippled SMART data. Intel is still the best at this point in time. Hoping they get their act together and develop their own 6Gbps controller with 34nm NAND, best at 4K random read/write (only a few care about sequential)!
Evening update:
225.5 hours, 66.3894 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 64 to 62.
Avg speed for all 225.5 hours is roughly 85.75 MiB/s.
Attachment 117512
Morning update:
237 hours, 69.87084 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 62 to 60.
Avg speed for all 237 hours is roughly 85.87 MiB/s.
Attachment 117533
Hi Vapor, any chance of showing the writes that have occurred following notification that the MWI is exhausted? Maybe a hatched extension on the bar in the MWI Exhaustion graph?
I'm surprised at how much the Samsung 470 has been able to write after MWI exhaustion. At this rate it will be able to double the amount of data it took to exhaust the MWI.
Is anyone else going to get an SF2xxx drive to test? If not, I might pick one up. One "hacked" and one with throttling enabled would be interesting.
I'll find out on mine as well, just need to get an opportunity to power down for a few minutes.
+
^ look in your PM.
:)
I'll try to get a special build for you later today, in order to find that special SMART attribute for reporting wearout.
I've figured out the Host writes on the 320 series using totally undocumented vendor specific info returned by WMI.
--
149.40TB Host writes
MWI 18
Reallocated sectors : 6
MD5, all tests were OK.
@Vapor
A P/E count chart would be useful in general. (TiB written/capacity)
Is nobody going to take up my offer and start testing endurance in terms of secure erases? A secure erase is like writing the whole capacity of the SSD in a couple of seconds, as it "zaps" the SSD's NAND cells and resets them. Anyone interested, so we can see how durable this mechanism is and how many times an SSD can be secure erased before it fails? Maybe an automated hdparm script or something. Anyone?
This really is a valid point that also needs to be tested if we are talking about SSD endurance.
Or maybe test it on the V2 drive, so we can see how many secure erases it can take, since the standard endurance test failed because of throttling, etc.?
^^
You need to repower the drive every time you do a secure erase. Nobody is going to sit there and do it.
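For the record, if someone ever rigged up automatic power cycling (a relay or a controllable PSU), the software side could be as simple as this sketch. /dev/sdX and the password are placeholders; the two hdparm calls are the standard ATA security sequence for a secure erase on Linux:

Code:
import subprocess

DEVICE = "/dev/sdX"   # placeholder: the SSD under test
PWD = "erase"         # throwaway ATA security password

def secure_erase():
    # Set a temporary security password, then issue the erase;
    # the ATA security feature set requires both steps.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", PWD, DEVICE], check=True)
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", PWD, DEVICE], check=True)

count = 0
while True:
    secure_erase()
    count += 1
    print(f"secure erase #{count} done")
    input("power cycle the drive, then press Enter...")  # the manual step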
That was the intention when I first made it, thanks for reminding me :up:
Anvil, do you mean a bar chart with P/E cycles? Or a bar chart with normalized writes? Unfortunately, only the Crucials, the Samsung, and the SandForce show anything directly related to NAND writes (and therefore P/E cycles).
C300 update from earlier today, didn't have a chance to post.
29.971TiB, 90 MWI, 505 P/E cycles, 61.8MiB/s, ~240/0 MD5 runs/mismatches
Attachment 117544
Evening update:
248.5 hours, 72.6775 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 60 to 59.
Avg speed for all 248.5 hours is roughly 85.18 MiB/s (avg has gone down some due to 30 min of Windows Update).
Attachment 117546
No, I meant "Host writes" / Capacity, which can be used by all* drives and it should be pretty close to the P/E count for the Intels.
(all drives if using the running total option)
We will need something once the counters stop telling what's going on, and that time has come for the 320, unless we find some way to get at the other wear-out counter. That would still leave the X25 series out in the cold, as it has no extra wear-out counter.
It's not much but it's something.
110.000 TiB, 309 hours, sa177: 1/1/9123
The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.
110 TiB comes to 66.2 GB per day for 5 years. So the Samsung 470 64GB SSD has passed the milestone where you could have written the entire available capacity of the SSD, 64GB, each and every day for 5 years.
Attachment 117554
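Checking that milestone math (110 binary TiB converted to decimal GB, spread over 5 years):

Code:
gb_written = 110 * 1024**4 / 1e9     # ~120,946 GB
per_day = gb_written / (5 * 365.25)  # days in 5 years
print(round(per_day, 1))             # 66.2 GB/day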
The 60GB Corsair Force GT is $145 on newegg ($155 - $10 for promo code HARDOCPX7X6D until 7/12)
http://www.newegg.com/Product/Produc...82E16820233193
261 hours, 76.4858 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 59 to 57.
Avg speed for all 261 hours is roughly 85.35 MiB/s.
Attachment 117566
198TB. 20 reallocated sectors. Md5 OK. Anvil - I will try your new app tonight to see if I can get that SMART attribute.
There was something wrong with XS last night, so here are the numbers from yesterday evening:
273 hours, 80.1158 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 57 to 54.
Avg speed for all 273 hours is roughly 85.47 MiB/s.
This morning's numbers are:
283 hours, 83.1993 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 54 to 53.
Avg speed for all 283 hours is roughly 85.63 MiB/s.
Attachment 117586
It seems like the M4 and the 320 are close to cracking the 200TB barrier. Wonder how the C300 will perform.
The 320 should be > 200TB; not sure how you could have misread the TB for the m4 (it's currently at ~84TiB).
--
154.72TB Host writes
MWI 15
Reallocated sectors still at 6
MD5, no errors.
It looks like speed is a bit up for this session, 33.8MB/s on avg. (~35 hours)
@B.A.T.: it seems like your avg write speed is going up! That's a good feature to have: the more you write on me, the faster I become... :D
It's only going up because the test was halted for a couple of hours. From my next update I'll be using the latest version of Anvil's app, where the avg speed is calculated for me. At the moment the avg speed is 88.38 MiB/s.
C300 Update from yesterday when XS was down:
36.89TiB, 61.8MiB/sec since switching to the latest Anvil App, 88MWI, 620 P/E cycles (SMART raw wear).
Attachment 117607
Going to change the "normalized writes" charts to Anvil's suggestion (just a change of units and an addition of a new "Writes So Far" chart).
Updated charts :)
Host Writes So Far
Attachment 117617
Attachment 117618
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117619
MWI Exhaustion:
Attachment 117620
Writes vs. NAND Cycles:
Attachment 117621
Normalized data graphs
The SSDs are not all the same size; these charts normalize for total NAND capacity.
Writes vs. Wear:
Attachment 117622
MWI Exhaustion:
Attachment 117623
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117624
MWI Exhaustion:
Attachment 117625
You want this chart:
http://www.xtremesystems.org/forums/...0&d=1310576251
Evening update.
I updated to Anvil's latest version and have started using the MD5 check.
296 hours, 87.1098 TiB, Wear Leveling Count and Percentage of the rated lifetime used has gone from 53 to 50.
Avg speed reported by Anvil's app is 88.38 MiB/s.
MD5, no errors.
Attachment 117633
Attachment 117634
I'll be impressed if the Samsung 470 makes it past 200TiB. I'm far from certain, but my best guess is that the write amplification of the Samsung is around 5 -- much higher than the other SSDs in the test. If the Samsung makes it to 200TiB, I guess that would be about 16,000 erase cycles on the flash.
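That 16,000 guess follows directly from the stated assumptions (200TiB of host writes, a write amplification of 5, 64GiB of NAND; estimates, not measured values):

Code:
host_writes_gib = 200 * 1024                   # 200 TiB expressed in GiB
write_amp = 5.0                                # guessed WA for the Samsung
nand_gib = 64
print(host_writes_gib * write_amp / nand_gib)  # 16000.0 erase cycles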
We'll just have to wait, I'm already impressed by the Samsung.
The Samsung is a bit of a dark horse as we don't know much about the drive at all.
C300 evening update, plus a new chart to add to the mix.
41.787TiB, 86MWI, 702 P/E cycles (raw wear indicator), all MD5 okay, 81.7MiB/sec
Attachment 117638
New chart!
Approximate Write Amplification
Attachment 117639
Using the normalized writes (total GiB written / total NAND capacity), the raw wear indicators (Samsung and Crucials), and raw wear inferences for the Intels ((100 - MWI) x 50), I came up with a chart of approximate write amplification during testing: raw wear divided by normalized writes. Not sure how useful it is, ultimately, but it seems interesting :p:
The only value I'm unsure of is the X25-V; it may have 10k-cycle NAND, in which case its WA would be 2x what is shown, as I calculated with the assumption of 5k-cycle NAND.
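In code form, the approximation behind the chart looks roughly like this (a sketch; the function names are mine, and the Intel inference assumes the 5k-cycle rating mentioned above):

Code:
def normalized_writes(host_gib, nand_capacity_gib):
    # Host writes expressed as "drive fills" = host-side P/E equivalents.
    return host_gib / nand_capacity_gib

def approx_wa(raw_pe_cycles, host_gib, nand_capacity_gib):
    # NAND-side cycles (raw wear attribute) over host-side cycles.
    return raw_pe_cycles / normalized_writes(host_gib, nand_capacity_gib)

def intel_inferred_cycles(mwi, rated_cycles=5000):
    # The Intels expose no raw cycle counter; infer from MWI, where
    # each point off 100 represents 1% of the rated cycles (50 for 5k).
    return (100 - mwi) * rated_cycles / 100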
Very interesting data, especially the Samsung, as they are probably using different production methods compared to ONFI players like Intel/Micron, but numbers like 16,000 seem pretty insane!