Just for fun.
Attachment 118579Attachment 118578
Nice result after 207 TiB is written :)
^^Makes me feel good about buying 2 for my RAID0 :)
@B.A.T
Looks like it's brand new :)
--
209.85TB Host writes
Reallocated sectors : 6
MD5 OK
36.47MiB/s on avg (12 hours)
271TiB. 39 reallocated sectors. The value which I believe to be the wear indicator is 135.
Please try to see if you can get TRIM working on the Indilinx drive; otherwise this test is not really lifelike.
I think 2 in raid 0 is a killer combo. m4 is looking very impressive and they are cheap :)
Yeah. It's not much difference from my first run with AS SSD.
Of course it is lifelike. Intel recommends running the Intel SSD Toolbox once every week, which would equal every 10-140GB depending on your usage pattern.
TRIM will not clean all "deleted" data even if you are running an OS supporting TRIM (which is why running the Toolbox is recommended).
Not sure how many GB are written between each run of "wiper", but it is surely much more than the recommended 10-140GB, a lot more :). An average of 40MiB/s equals 1 week of writes written in just 1 hour.
So if he cleans the drive once per day, it would equal (a minimum of) 24 weeks of writes; he can in fact run "wiper" once every hour and still be within the norm.
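For scale, the claim above works out like this (all inputs are the numbers quoted in this thread, not measurements of any particular drive):

```python
# Rough arithmetic behind "1 week of writes in just 1 hour".
avg_write_mib_s = 40                 # average test write speed quoted above
weekly_gb_high = 140                 # high end of Intel's 10-140GB/week figure

gib_per_hour = avg_write_mib_s * 3600 / 1024       # MiB/s -> GiB/hour
weekly_high_gib = weekly_gb_high * 10**9 / 2**30   # 140 GB expressed in GiB

print(f"{gib_per_hour:.0f} GiB/hour vs. {weekly_high_gib:.0f} GiB/week")
# -> roughly one "heavy" week of normal desktop writes every test hour
```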
m4 update:
213.7184 TiB
711 hours
Avg speed 90.42 MiB/s.
AD gone from 236 to 232.
P/E 3733.
MD5 OK.
Still no reallocated sectors
Attachment 118602
M225->Vertex Turbo 64GB Update:
17.60 TiB
119.74 hours
MWI 86 (drops by 1 for every 50 raw wear)
703 Raw Wear
36.24 MiB/s avg for the last 24.02 hours
No MD5 Testing yet
Also, C3 still at 2 and CE still at 235, same as last update.
Attachment 118603
321.708 TiB, 876 hours, sa177: 1/1/26383, sa178: 70/70/296
Average speed reported by Anvil's app has been steady at about 113MB/s.
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
sa178 raw increase of 4 this time -- still have not seen an increase by 1 (or any other odd number). It looks like the normalized value decreases by 1 for about a 10 increase in the raw value. So the raw value should be roughly 1000 when the normalized value reaches 1. If that is 1000 erase blocks of 512KiB each, then we are looking at roughly 512MiB of reallocated flash, or roughly 0.8% of 64GiB on board. Seems plausible to reserve a little less than 1% of flash for reallocated blocks.
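The back-of-the-envelope above, as a sketch. The raw-per-normalized slope and the 512KiB erase-block size are the post's assumptions, not Samsung specs (1000 blocks x 512KiB is 500MiB, which the post rounds to "roughly 512MiB"):

```python
# Estimate of the Samsung 470's reallocation reserve from sa178 behavior.
raw_per_point = 10     # observed: raw +~10 per -1 in the normalized value
points_total = 100     # normalized value counts down roughly 100 -> 1
block_kib = 512        # assumed erase-block size

blocks_at_exhaustion = raw_per_point * points_total      # ~1000 blocks
reserved_mib = blocks_at_exhaustion * block_kib / 1024   # 500 MiB
pct_of_nand = 100 * reserved_mib / (64 * 1024)           # share of 64 GiB
print(f"{reserved_mib:.0f} MiB reserved, {pct_of_nand:.1f}% of on-board NAND")
```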
C300 Update
148.55TiB, 50MWI, 2511 raw wear, 2048/1 reallocation, MD5 OK
SF-1200 60GB Update
25TiB host writes, 14.25TiB NAND writes, 96 MWI, 228 raw wear, wear range delta 4, no reallocations.
Charts Updated :)
All bar charts are sorted by their respective equivalent of Writes So Far. SF-1200 MWI Exhaustion expectation is overly optimistic because I am still running compression tests (and 0-fill was redone after it went below 100 MWI); it will probably always be optimistic until MWI does deplete. SF-1200 observed WA is also optimistic, for the same reasons.
Host Writes So Far
Attachment 118613
Attachment 118614
Normalized Writes So Far
The SSDs are not all the same size, these charts normalize for available NAND capacity.
Attachment 118615
Attachment 118616
Write Days So Far
Not all SSDs write at the same speed, these charts factor out write speeds and look at endurance as a function of time.
Attachment 118617
Attachment 118618
Host Writes vs. NAND Writes and Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 118619
Attachment 118620
Attachment 118621
So, 3 drives have made it past 5000 "P/E" cycles. (Host writes/NAND "size")
1x 34nm drive, 1x 32nm drive and 1x 25nm drive.
--
Kingston SSDNow 40GB (X25-V)
212.62TB Host writes
Reallocated sectors : 6
MD5 OK
m4 update:
220.3349 TiB
732hours
Avg speed 89.74 MiB/s.
AD gone from 232 to 228.
P/E 3850.
MD5 OK.
Still no reallocated sectors
Attachment 118639
Not really, but I have found a wiper that does not require any user input/interaction. Could you try downloading and running it, just to see if it works?
Link
For this to work flawlessly, wiper would have to exit once the operation is done; could you check that as well?
330.672 TiB, 899 hours, sa177: 1/1/27126, sa178: 70/70/302
Average speed reported by Anvil's app has been steady at about 113MB/s.
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
By the way, I checked the SMART attributes last night, at 884 hours, and sa178 raw was at 300.
M225->Vertex Turbo 64GB Update:
20.31 TiB
138.91 hours
MWI 84 (drops by 1 for every 50 raw wear)
815 Raw Wear
??? MiB/s avg for the last ??? hours
MD5 OK
Also, C3 still at 2 and CE still at 235, same as last update.
Had a system crash this morning while running the app so I lost my MiB/s avg.
Attachment 118644
The real winner will be seen after all the SSDs have died, I think!
C300 Update
153.9TiB, 48MWI, 2602 raw wear, 2048/1 reallocations, MD5 OK
SF-1200 nLTT 60GB Update
32.5TiB host writes, 17.594TiB NAND writes, 94 MWI, wear range delta of 5, 281.5 raw wear (equiv).
215.85TB Host writes
Reallocated sectors : 6
33.83MiB/s on avg (28 hours)
M225->Vertex Turbo 64GB Update:
23.86 TiB
MWI 82 (drops by 1 for every 50 raw wear)
945 Raw Wear
MD5 OK
Attachment 118667
ok...so who bets on whether these things do a PB or not?
I bet one does. :) My money's on the C300.
God, something needs to explode or something :)
Most probably at least half of them will hit the 1PB mark if there is no controller failure. The problem is finding out how usable they are after that mark. That would mean keeping them in storage for 3-6 months and checking if the data is still there. But in doing so, we would only know that the drives are capable of at least 1PB of writes, without knowing any upper limit.
And if I were to bet, my money in the 1PB race is on the Samsung model... I only need to wait another 78-80 days :) (less time, less chance of total failure)
I would not bet on it. I am becoming increasingly convinced that the write amplification on the Samsung is indeed about 5. And I guess (not convinced, but suspect) that smart attribute 178 normalized is the percentage of blocks left to be used for reallocation. When the pool of blocks available for reallocation is exhausted, the SSD should start having problems fast. If my guess for sa178 is correct, then the Samsung is on a countdown to write death, 72, 71, 70, 69....
I noticed the death countdown... but I also suspect this is just the pool of blocks for reallocation, not the total spare ones. Once this pool is empty, it will probably start using blocks from over-provisioning, of which there are a lot more, probably at the cost of slower write speed. So if the bad block count does not start to increase exponentially, then we will have a winner.
358.820 TiB, 974 hours, sa177: 1/1/29409, sa178: 68/68/324
Average speed reported by Anvil's app has been steady at about 113MB/s.
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
Big change in sa178 this time.
m4 update
245.2509 TiB
813hours
Avg speed 89.29 MiB/s.
AD gone from 228 to 213.
P/E 4290.
MD5 OK.
Still no reallocated sectors
Attachment 118728
Just getting caught up, was AFK for the past few days. Good to see I didn't miss any drive failures :p:
Compression testing on the SF-1200 ended early Saturday so that means I started on the endurance testing, and have a few observations from that:
1) Speed with an empty SF-1200 drive with the 46% setting was roughly ~61MiB/sec. With a drive with 34GiB taken up, it's down to ~56.2MiB/sec (24hr average). MD5 checks (every 10 loops) take up part of that, surely, but a 10% drop is bigger than I expected.
2) Wear range delta on the SF-1200 has gone down since endurance testing started (which is odd with static data, no?). It started at 8, was at 6 last night, and is now at 3. :confused:
Anyway, here's the strings of updates.
C300 64GB
156.745TiB host writes, 47MWI, 2651 raw wear, 2048/1 reallocations, MD5 OK
159.85TiB host writes, 46MWI, 2703 raw wear, 2048/1 reallocations, MD5 OK
163.42TiB host writes, 45MWI, 2763 raw wear, 2048/1 reallocations, MD5 OK
169.71TiB host writes, 43MWI, 2860 raw wear, 2048/1 reallocations, MD5 OK, 62.6MiB/sec (which is up by .7MiB/sec over when there was no SF-1200 onboard, odd).
SF-1200 nLTT 60GB
36.125TiB host writes, 19.125TiB NAND writes, 94 MWI, 6 wear range delta.
40.3125TiB host writes, 19.8438TiB NAND writes, 93 MWI, 7 wear range delta.
42.875TiB host writes, 21.719TiB NAND writes, 92 MWI, 8 wear range delta. Real endurance testing with static data, MD5 checks, and such started after this point.
46.813TiB host writes, 24.781TiB NAND writes, 91 MWI, 6 wear range delta, MD5 OK
48.125TiB host writes, 25.781TiB NAND writes, 90 MWI, 3 wear range delta, 56.2MiB/sec, MD5 OK.
(been out of town for the weekend, test was off for about 36 hours)
220.19TB Host writes
Reallocated sectors : 6
MD5 OK
M225->Vertex Turbo 64GB Update:
33.04 TiB
223 hours
MWI 74 (drops by 1 for every 50 raw wear)
1311 Raw Wear
40.52 MiB/s avg for the last 18.4 hours
MD5 OK
Also, C3 still at 2 and CE still at 235, same as last update.
Attachment 118748
EDIT: Found a new/old Indilinx Tool. Here is some info from it.
per-CE Count Info:
Attachment 118749
Bad Block List:
Attachment 118750
Erase Count List (too large to show, so here is a summary):
BANK = Total (4096 BLK)
0 = 5348386
1 = 5446898
2 = 5408294
3 = 5380654
4 = 5444565
5 = 5428503
6 = 5492693
7 = 5431371
368.289 TiB, 999 hours, sa177: 1/1/30182, sa178: 67/67/328
Average speed reported by Anvil's app has been steady at about 113MB/s.
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
If sa177 is average number of erase cycles for the flash, then the Samsung has just passed the rated cycles for a common type of Micron eMLC flash (30,000). Maybe the flash deterioration will accelerate soon.
m4 update:
252.4310 TiB
836hours
Avg speed 89.40 MiB/s.
AD gone from 213 to 209.
P/E 4416.
MD5 OK.
Still no reallocated sectors
Attachment 118756
C300 Update
174.435TiB host writes, 41 MWI, 2950 raw wear, 2048/1 reallocations, MD5 OK
SF-1200 nLTT Update
52.625TiB host writes, 29.563TiB NAND writes, 88 MWI, 473 raw wear (equiv), MD5 OK
222.75TB Host writes
Reallocated sectors : 6
MD5 OK
32.53MiB/s on avg (35 hours)
M225->Vertex Turbo 64GB Update:
36.96 TiB
246 hours
MWI 71 (drops by 1 for every 50 raw wear)
1451 Raw Wear
43.93 MiB/s avg for the last 17 hours
MD5 OK
Attachment 118764
^ Incredible, especially at those speeds. By coincidence that is x100 the amount of data that I have written to my main SSD after a couple of years worth of use. :cool:
I wonder what the Sammy bases the MWI on? For the other SSDs it appears to be the theoretical minimum capability of NAND P/E cycles. There must be a good reason why the MWI in general appears to be way off for the other drives. Perhaps other thresholds are also considered, like the ability to retain static data over time once the MWI is getting close to depletion.
I think the Samsung does something similar with MWI to the other drives. It is just that the Samsung likely has WA of about 5, so it goes through the cycles a lot quicker than a drive with 1.1 WA.
m4 update:
259.9290 TiB
861hours
Avg speed 89.44 MiB/s.
AD gone from 209 to 205.
P/E 4547.
MD5 OK.
Still no reallocated sectors
Attachment 118771
C300 Update
179.364TiB host writes, 40 MWI, 3034 raw wear, 2048/1 reallocations, 61.9MiB/sec, MD5 OK. ~11TiB more and it will have the longest lasting MWI so far :)
SF-1200 nLTT Update
57.125TiB host writes, 33.094TiB NAND writes, 87 MWI, raw wear 529.5 (equiv), wear range delta still at 3, 56.1MiB/sec, MD5 OK.
I think new charts tomorrow, next M225->Vertex Turbo update should move it past the Vertex 2 40GB in normalized writes and should be the last order change for awhile (have to recolor a lot of things in the bar charts every time the order changes). Samsung 470's reallocation line is...impressive :p:
M225->Vertex Turbo 64GB Update:
40.28 TiB
268 hours
MWI 69 (drops by 1 for every 50 raw wear)
1579 Raw Wear
34.55 MiB/s avg for the last 20 hours
MD5 OK
Attachment 118778
As the unthrottled V2 drive has now written around the same amount of data as the V3 had before it became throttled, I thought I'd show a comparison. (I don't want to start a debate on the issue, but thought it was worth showing.)
• The V2 dropped to MWI 87% whilst the V3 only dropped to 92% to write the same amount of data
• On avg the V2 dropped a MWI point per 2,420GB of data
• On avg the V3 dropped a MWI point per 3,752GB of data
• Reduction in writes between the V3 & V2 per MWI = 35%
The V2 = 32nm Hynix
The V3 = 25nm Intel
I doubt the V3 controller is that much better at reducing wear or that Intel's 25nm is that much better than the 32nm Hynix, so I'd guess that the MWI is linked in some way to LTT and the V3 was reporting the MWI "incorrectly". By this I'd guess more P/E cycles were depleted than indicated on the MWI, but LTT had not had a chance to correct the situation and bring P/E back in line with the MWI.
Attachment 118785
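The percentage in the last bullet follows directly from the two per-point figures above (thread numbers, not specs):

```python
# Reduction in writes per MWI point, V2 vs. V3 (figures from the post above).
v2_gb_per_point = 2420   # GB written per MWI point on the V2
v3_gb_per_point = 3752   # GB written per MWI point on the V3

reduction_pct = 100 * (1 - v2_gb_per_point / v3_gb_per_point)
print(f"{reduction_pct:.1f}%")   # ~35%, as stated above
```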
Can someone explain what the attributes "CA-Total Count of Error bits from flash" and "CB-Total Count of Read Sectors with Correctable Bit Errors" are all about?
This white paper is a bit dated, but it explains why ECC is necessary and how it works.
http://www.imation.com/PageFiles/83/...hite-Paper.pdf
226.72TB Host writes
Reallocated sectors : 6
MD5 OK
34.28MiB/s on avg (33.45 hours)
And it just keeps going on. Just got to love'em :up:
m4 update:
266.7679 TiB
883hours
Avg speed 89.39 MiB/s.
AD gone from 205 to 201.
P/E 4668.
MD5 OK.
Still no reallocated sectors
Attachment 118792
Hello all
I am new here. :)
Running a Corsair Force 40 GB. The drive was completely virgin and is a slave drive obviously. Initial screenshots after NTFS format with 4096K allocation:
Attachment 118795Attachment 118796
Min GiB free set to 36 GiB.
so far:
51.27 GiB written
0.26 hours
37.41 MB/s average
Where do you get P/E cycles? Please let me know if you need anything else.
P.S. I didn't realize the MD5 check was built into the utility until after I ran it; I generated my own checksums manually, but I will enable it if I ever stop the utility (stopping and resuming afterwards would be okay, right?)
292.75TB. 48 reallocated sectors. MD5 OK. The value which I think is the wear indicator is at 146.
Ooo, a new drive :)
Need to dot some I's and cross some T's first though :)
What kind of static data do you have? Most of us (all of us?) are running with our drives ~60-70% full of static data and with minimum free space of 12GiB in Anvil's app (Anvil's app will fill the drive until only that much is free, then delete and refill). I see you're running 36GiB free, which leads me to believe 0 static data and a very short run length of ~1.25GiB (your drive has only ~37.25GiB user capacity, AFAIK).
What compression setting in Anvil's app? Anything but 0-fill (and probably 8%) is preferred for SandForce drives. Incompressible isn't necessary as SMART attribute E9 tracks NAND writes. Based on NAND writes, we can figure out P/E cycles; it isn't a SMART attribute with SandForce drives.
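That P/E-from-NAND-writes calculation is what produces the "raw wear (equiv)" figures in the SF-1200 updates earlier in this thread. The 64GiB raw-NAND figure for a 60GB SandForce drive is the usual assumption:

```python
# P/E-cycle estimate for a SandForce drive: there's no P/E SMART attribute,
# so cycles are derived from E9 (NAND writes) and the raw NAND on board.
nand_writes_tib = 14.25   # E9 from an SF-1200 update earlier in the thread
raw_nand_gib = 64         # assumed raw NAND on a 60GB drive

pe_cycles = nand_writes_tib * 1024 / raw_nand_gib
print(pe_cycles)          # 228.0 -- matches the "228 raw wear" reported
```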
MD5 can be enabled at any time :)
I see that's actually a Force 40-A. I don't know what that means for the 40GB variety, but every other -A version is 25nm IMFT. I wasn't aware there were 40GB 25nm drives in the wild, could be interesting :)
And last but not least, are you okay with Lifetime Throttle (LTT) kicking in? It's likely that in a couple of weeks your drive will slow to a crawl as LTT attempts to force it to last for a very long time (no idea what timespan Corsair set this to, however).
C300 Update
183.345TiB host writes, 38 MWI, 3101 raw wear, 2048/1 reallocations, 62.2MiB/sec, MD5 OK
SF-1200 nLTT Update (from much earlier today)
59.94TiB host writes, 35.25 NAND writes, 85 MWI, 564 raw wear (equiv), wear range delta 3, 56.2MiB/sec, MD5 OK.
Charts Updated :)
All bar charts are sorted by their respective equivalent of Writes So Far.
Host Writes So Far
Attachment 118797
Attachment 118798
Normalized Writes So Far
The SSDs are not all the same size, these charts normalize for available NAND capacity.
Attachment 118799
Attachment 118800
Write Days So Far
Not all SSDs write at the same speed, these charts factor out write speeds and look at endurance as a function of time.
Attachment 118801
Attachment 118802
Host Writes vs. NAND Writes and Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 118803
Attachment 118804
Attachment 118805
Thanks for your reply Vapor.
I only have a few files on the drive: a word file, an excel file, and an image file. In total they are consuming approximately 2.2 MiB.
Currently:
3500.91 GiB written
14.53 hours
68.21 MB/s average
I will double check this, but AFAIK the F40-A is not 25nm based. There was a slight change in BOM with regards to some other chipset components, which also gave rise to the firmware versions beyond 2.0. All of the original Force drives are at 2.0. Depending on which new BOM you have, you'll have either 2.1b or 2.2.
Also, AFAIK, the SMART data is mostly broken in the firmware versions beyond 1.1. The health, for example, is rigged to always say 100% (if somebody else finds otherwise, please let me know).
I have no problem with LTT kicking in. In fact, it was one of my plans to figure out just when exactly it activates. The drive was meant to be expendable, so I have no problem ruining it. I also plan on secure erasing the drive after it kicks in to see if that would restore any of the performance. From your experience, do you think it would most likely not do anything?
Lastly:
1) What do you recommend I set the compression to?
2) What do you recommend I set the GiB free setting to?
3) Should I hit stop to change these settings, as well as enable MD5?
Thanks. :)
m4 update:
273.4735 TiB
905hours
Avg speed 89.43 MiB/s.
AD gone from 201 to 197.
P/E 4786.
MD5 OK.
Still no reallocated sectors
Attachment 118835
I'll be joining with a second ssd within a week or two. I just ordered a Kingston SSDNow V+ 100 64GB.
@SynbiosVysem, the rest of us are keeping a lot more static data on the drive than that...basically the same amount as a compact Windows install.
SMART isn't broken, the algorithm is just changed. MWI will drop below 100% after 5-10TiB of NAND writes, then count down normally from there.
When LTT kicks in, the only thing that will restore performance is idle time connected to a PC. A secure erase won't do anything (other than restore the drive to a fresh state compared to a used/steady state...but LTT is independent of burst throttling and new/used/steady state). Will be interesting to see what lifetime length your drive is set for (will be able to tell once LTT kicks in). The Vertex 2 40GB (34nm) was set for 1 year with 5000 P/E cycle NAND but we have no idea what Corsair set theirs for (or even what NAND you have).
I'd recommend either 46% or 67% compression. 46% acts an awful lot like Windows/applications/caches and 67% acts an awful lot like documents/pictures (DOCX, XLSX, DNG, PNG, and JPEG).
I'd recommend GiB free set to 12 like the rest of us.
And yeah, feel free to stop/start at will (you don't even need to have "Keep running totals" checked because your drive tracks both host and NAND writes).
@B.A.T, very excited for the V+100!
I think at first (while MWI=100), twice a day should be good (if you can). That way we get a decent idea of how long MWI stayed at 100. After that, once a day or even less frequently is probably fine as MWI drops linearly. With LTT looming, checking in periodically on speeds (even if you don't report) doesn't hurt :)
229.87TB Host writes
Reallocated sectors : 6
MD5 OK
I threw the M225->Vertex Turbo 64GB on a Windows 7 Enterprise 64bit system late this afternoon before I left work to see how it reacts with trim.
I ripped through 20 loops and was at 99.98 MB/s Avg when I left :yepp:
Can't wait to see what numbers it brings in the morning :)
C300 Update
189.23TiB host writes, 36 MWI, 3200 raw wear, 2048/1 reallocations, 62.3MiB/sec, MD5 OK
SF-1200 nLTT Update
66.938TiB host writes, 40.75TiB NAND writes, 82 MWI, 652 raw wear (equiv), 56MiB/sec, MD5 OK
I would agree with that statement; the terrible WA on that drive is just indicative of how resilient this NAND is.
Quote:
Samsung is breaking the 400TiB barrier tonight. Wow! I don't care if its WA is 5 or 5000. This drive is a winner!
If we take into consideration the WA of the Samsung, and compare that to the WA of the other drives, what would the expected amount of writes for the other drives be? They should basically last forever. lol.
If the write amplification of the Samsung is at 5.14 and it is passing 400TiB, would I be wrong to extrapolate that the NAND itself would last *at least* until 1.9PB for one of the drives which have a WA of 1?
My thought process is that it is beating that NAND at a much harder rate than the others are.
Well, 400TiB * 5.14x = 2.01PiB. That's pretty epic already.
If the other 60/64GB drives survive to 2PiB with their 1.00-1.11x WA and 55-90MiB/sec speeds, we're going to be at this for a very, very long time.
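The arithmetic behind that extrapolation (both inputs are thread estimates, not specifications):

```python
# NAND-side writes implied by the Samsung's host writes and estimated WA.
host_tib = 400        # host writes at this point in the test
wa_estimate = 5.14    # thread's WA estimate for the Samsung 470

nand_pib = host_tib * wa_estimate / 1024
print(f"{nand_pib:.2f} PiB of NAND writes")   # ~2.01 PiB
# A 1.0x-WA drive whose NAND survives the same cycling could therefore
# absorb on the order of 2 PiB of *host* writes.
```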
Wasn't part of the impetus for this test some site torturing a drive with over 1PB of writes and people doubting it? Looking more and more plausible now.
Yes, that was the beginning: discussion of a test with a PB written over a year of testing.
The forum was split (well, there were a few who thought PB+) (Comp *whistles innocently*)
So Mr. Anvil decided to test. Truly a great beginning to many things for the site. From this testing we have learned very much: SF throttling, more about SF throttling LOL, WA, different SMART values, how to figure out compression ratios on SF, Anvil developing an uber app... I'm sure there is some I missed. Great stuff though :)
But as this progresses I'm sure there will be more epic revelations... and with the possibility of a *minimum possible* 2PB on some of these devices, we might start testing next gen, and the gen beyond that, before these things die!
Someone do some math: what would the P/E count be for the Samsung right now, with its 400TiB and 5.14 WA?
Just look at SMART attribute 177's raw value. Over 32,600 right now :)
:eek2::shock::explode::explode2::rehab::YIPPIE::cheer2::bananal:
Just remember that the WA calculation is our guess rather than a fact. A WA of 5.14 would mean that the NAND on the Samsung is actually being written at 580MB/s, which is rather doubtful. Looking at how the reallocated sectors are growing on my drive and on the Samsung, I'd say we will be getting somewhere soon. And by soon I mean in the next few hundred TB lol
M225->Vertex Turbo 64GB Update:
48.55 TiB
309 hours
MWI 64 (drops by 1 for every 50 raw wear)
1830 Raw Wear
68.09 MB/s avg for the last 16.7 hours (on W7 x64)
MD5 OK
Attachment 118878
m4 update:
279.3588 TiB
924hours
Avg speed 89.39 MiB/s.
AD gone from 197 to 193.
P/E 4890.
MD5 OK.
Still no reallocated sectors
Attachment 118883
Next update will be late monday. I'm going away for the weekend.
Whoa, bluestang...adding TRIM has severely affected the WA.
Your Average WA had been hovering at ~2.43x for the past few updates, only to drop to 2.33x (a very large drop considering Average WA is a figure based on the sum of all updates). It dropped so much because Recent WA (WA based on just the change between the previous update and the current one) went from 2.39x to 1.89x :eek:
You can do whatever you want with Win7 vs. XP :)
Just found it interesting that TRIM makes such an impact on WA (and it obviously has with speed, too).
I don't use NAND spec for calculating WA. I take SMART attribute C7 and divide by attribute D0 (and adjust units).
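The two WA figures quoted in the updates can be sketched like this (the totals below are made-up illustrations, not bluestang's actual attribute values):

```python
# "Average WA" uses lifetime totals; "Recent WA" uses only the delta
# between two consecutive check-ins, so it reacts much faster to a
# workload change (e.g. enabling TRIM).
def wa(nand_writes, host_writes):
    return nand_writes / host_writes

# hypothetical lifetime totals (GiB) at two consecutive updates
prev_host, prev_nand = 50_000, 120_000
curr_host, curr_nand = 52_000, 123_800

average_wa = wa(curr_nand, curr_host)                          # ~2.38x
recent_wa = wa(curr_nand - prev_nand, curr_host - prev_host)   # 1.90x
print(round(average_wa, 2), round(recent_wa, 2))
```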
SF-1200 nLTT Update
69.5TiB host writes, 42.78TiB NAND writes, 81 MWI, 684.5 raw wear (equiv), wear range delta 3, 56MiB/sec, MD5 OK
9,968.87 GiB
42.58 hours
66.48 MB/s average
Stopping now and changing settings to:
46% Compression
MD5 checking enabled
Min GiB free to 1 (since I don't really want to add more static data)
I want to write a ton of data to this drive. The faster I can get it to LTT the better. :)
Just broke 50 TiB mark :clap:
I think I'll let it run on Win7 over the weekend...gotta love the speed so far... :up:
Attachment 118888
That is a huge improvement bluestang :)
Why not keep it running off W7 24/7 :D
--
232.58TB Host writes
Reallocated sectors : 6
MD5 OK
35.28MiB/s on avg (~22 hours)
406.012 TiB, 1100 hours, sa177: 1/1/33228, sa178: 62/62/380
Average speed reported by Anvil's app has been steady at about 113MB/s.
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
Unless the decay in sa178 normalized accelerates, it looks like 30 to 60 days until it reaches 1. I wonder what happens at that point. Of course, I would not be surprised if the decay DOES accelerate, and there is less than 30 days left to reach 1.
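A simple linear extrapolation from the last two Samsung updates gives the upper end of that window; any acceleration in the decay would shorten it:

```python
# Time for sa178 normalized to decay from 62 to 1 at the recent linear rate.
hours = [899, 1100]   # power-on hours at the two updates above
norm = [70, 62]       # sa178 normalized values at those updates

points_per_hour = (norm[0] - norm[1]) / (hours[1] - hours[0])
days_left = (norm[1] - 1) / points_per_hour / 24
print(f"~{days_left:.0f} days")   # ~64 days if the decay stays linear
```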
I agree that we do not know for certain that the WA is 5.
However, I disagree that WA=5 implies that the Samsung is writing to flash at 580MB/s. I think it implies that the Samsung is erasing flash blocks at 580MB/s. Not exactly the same thing as writing data to flash at 580MB/s.
What if it is doing them half and half: writing at 290 MB/s and then erasing at 290 MB/s? :p:
I would also agree that erasing must be done at 580MB/s. On the other hand, writing could be anywhere between 113MB/s and 580MB/s, because it might need to rewrite some pages from the erased block. Because it can keep a constant speed of 112-113MB/s, I would say that internally it is indeed writing much less than 580MB/s and has not reached the theoretical max write speed.
Corsair Force 40-A
14,567.35 GiB written
22.17 Hours
MD5 ok
58.79 MB/s
Typical NAND specs:
Array performance
– Read page: 50μs (MAX)
– Program page: 900μs (TYP)
– Erase block: 3ms (TYP)
Hi SynbiosVyse
Could you post SMART attribute values for E9, F1, E6 & the MWI when you do updates? (The RAW 1byte value for E6 works as a count down to LTT.)
Like I said before, I'm pretty sure most of the SMART info for Corsair's drives is borked. I started seeing stuff like this happening with firmware 2.0 (firmware 1.1 was okay). This screenshot was JUST taken (as you can see, barely anything has changed since my original, virgin screenshot several posts back):
Attachment 118922
Also notice the health is still at 100%, even after quite a bit has been written to the drive. Vapor believes it will go down eventually, but I will keep an eye on it. I would not be surprised if it is rigged to stay at 100%. I have worked a lot with these drives and with the older versions of firmware the health would drop pretty fast, ever since firmware 2.0 all drives I have seen are at 100%. Perhaps people have just not used them enough, we'll see.
Typical block size 512KiB
8 channels x 512KiB / 3ms = 1.398e9 B/s ≈ 1398 MB/s
So it would only take 3 channels of block erase going in parallel to hit 525 MB/s of erasing. But it may be even faster than that, since I don't think the controller has to hold each channel until the block erase is done, so it can probably have more block-erases going on in parallel than there are channels.
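Those timing figures, plugged in (the 512KiB block and 3ms erase are the "typical" specs quoted above):

```python
# Block-erase bandwidth per channel and in aggregate.
block_bytes = 512 * 1024
erase_time_s = 3e-3
per_channel_mb_s = block_bytes / erase_time_s / 1e6   # ~175 MB/s per channel

print(f"{8 * per_channel_mb_s:.0f} MB/s across 8 channels")   # ~1398 MB/s
print(f"{580 / per_channel_mb_s:.1f} channels to sustain 580 MB/s of erasing")
```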
15.0575 TiB
26.29 Hours
58.79 MB/s
MD5 Ok
E9 (if correct) is only showing 5,312 GiB of writes to NAND. F1, host writes, is showing 14,976 GiB. Were you running 0 fill? Now 46% fill?
LTT & MWI are based on E9. Increasing compressibility reduces the amount of writes that get recorded to E9.
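From the two totals just quoted, the effective write reduction so far is easy to check:

```python
# Ratio of NAND writes (E9) to host writes (F1) on the Force 40-A so far.
e9_gib = 5312     # NAND writes reported above
f1_gib = 14976    # host writes reported above
print(round(e9_gib / f1_gib, 2))   # 0.35 -- only ~35% of host data hit NAND
```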