:welcome:
Great having a "contest" on the Crucials :)
What are people expecting the outcome to be between the C300 and the M4?
Should I open my C300 and ensure it's 34nm? I know there's been no controversy, but 34nm supply has to dry up some time....
Unless they continue making 34nm chips and just charge more for it? I mean the C300's are going for more than the M4's of similar size from what I've been seeing.
OK, just ran it on my drive. Wow, shocked at the results.
Attachment 116885Attachment 116886
21 months of use.... I would have figured I would have burned through it more!
Just saying hello for now.
I've been lurking and reading the thread the last few weeks.
I finally got the registration button to work on the forum.
I bought my first SSD about the same time the testing started.
Went with the Intel 320 120GB.
The longer the testing goes the more I like it.
Enjoying the thread immensely and it looks like more fun to come. :clap:
157.6TB. 18%.
Only posting to say thanks to all the testers :)
This thread is a real mine of information :D
Wow... I've had my X25-V drives for closing in on 15 months. They're reporting 8142 and 8138 hours power-on time, and 0.99 and 1.09TB of writes, which is probably about 1/10 of what I thought I had done on them so far. They have an MWI of 98, with available reserved space of 100 and a threshold of 10.
Thanks for the warm welcome :)
Before we start the Crucial test we need to agree on the config.
- How much of the SSD should be filled with static data? 40 vs 64 GB
- What parameters should we use in Anvil's app?
- How much random vs. sequential?
Anything I missed?
I'm expecting them to exceed the 72TBW guarantee that Crucial has put on them, hopefully they'll get up there with the Intels.
As there is more NAND they should match the Intels but as we know, the controller can make the difference :)
I'm pretty sure there is 34nm NAND in it but maybe we should all open the drives that enter the Endurance test?
Appreciated :)
I'm not shocked at all, I've got plenty of SSD's that are low on writes.
As long as it's used for normal tasks it just doesn't write as much as one thinks.
My default setup runs without a pagefile, and System Restore and hibernation are off as well; these things will make a difference.
But, as we all can see, you shouldn't really worry :)
--
119.49TB Host writes
MWI 34
Still at 6 reallocated sectors.
I mentioned in my post that I put a ~40GB static file on the SSD. To be precise, it is 41992617078 Bytes (I imagine that is a typical amount of static data for a 64GB SSD). Anvil's app and its data are on the SSD as well. All settings in Anvil's app are the defaults, except that I checked the box for keeping running totals of GB written (an option just added yesterday).
For reference, the md5sum of the 42GB file is: 0d1c4ec44d9f4ece86e907ab479da280
Now that we have more drives joining in, who will track all the data?
The chart can go negative on wear-out, but a negative MWI value can only be assumed based on the average writes to date per MWI point.
It's also hard to see all the data at this size; I had to take out the hard numbers as they were too small to read. The Y axis only represents TB for One_Hertz. With more drives it will get a bit harder, but it would be good if all drives could be on one chart.
Attachment 116906
Alright, got a file ready for myself, weighing in at 42,022,123,868 bytes. C300 64GB should start tomorrow so long as UPS sticks to their delivery date.
I'm also going to test a SF-1200 drive now that the prospect of no-LTT has emerged (and if anyone wants to test a 25nm vs. my 34nm, let me know...easier to arrange testing and setup in pairs!). With a Sandforce back on the scene, I wanted to examine the compression settings in Anvil's app and see if any were suited to mimic 'real' data. With the discovery of the 233 SMART value, we can now see NAND writes in addition to Host writes, so if we can also write 'real' data we can kill two birds with one stone: see how long a drive lasts with 'real' use and how much the NAND can survive.
So what did I do?
First, I took two of my drives, C: and D:, which are comprised of OS and applications (C:) and documents (D:, .jpg, .png, .dng, .xlsx probably make up 95% of the data on it) and froze them into separate single-file, zero compression .rar documents. I then took those two .rar files (renamed to .r files...WinRAR wasn't too happy RARing a single .rar file) and ran them through 6 different compression algorithms: WinRAR Fastest RAR setting, WinRAR Normal RAR setting, WinRAR Best RAR setting, 7-zip Fastest LZMA setting, 7-zip Normal LZMA setting, and 7-zip Ultra LZMA setting. I then normalized the output file sizes.
Doing this created two 'compression curves' showing how my real data responds to various levels of compression. My thinking being that if any of Anvil's data compressibility settings had similarly shaped and similarly sized (after normalization) outputs, it would be a good candidate to use to mimic real data and allow the use of 'real' data with SF testing. Real data != 'real' data; 'real' data is just the best attempt to generate gobs of data that walk, talk, and act like real data. A great candidate would be a generated data set that had a compression curve between the two curves from real data, across the entire curve.
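If anyone wants to reproduce a rough version of these curves without driving WinRAR and 7-zip by hand, here's a minimal Python sketch that approximates the idea with the built-in zlib and lzma modules at a few levels. The file name is a placeholder for one of the frozen .r archives, and the ratios won't match the real RAR/LZMA settings exactly, but the shape of the curve comes out similar.
Code:
# Approximate compression curves using Python's zlib and lzma as stand-ins
# for the WinRAR/7-zip settings used above. File name is a placeholder.
import lzma
import zlib

def compression_curve(path, sample=64 * 1024 * 1024):
    """Return compressed/original size ratios for a sample of the file."""
    with open(path, "rb") as f:
        data = f.read(sample)                 # sample the first 64 MiB to keep it quick
    points = {}
    for level in (1, 6, 9):                   # roughly fastest / normal / best
        points[f"zlib-{level}"] = len(zlib.compress(data, level)) / len(data)
    for preset in (0, 6, 9):
        points[f"lzma-{preset}"] = len(lzma.compress(data, preset=preset)) / len(data)
    return points

if __name__ == "__main__":
    for name, ratio in compression_curve("C_drive_frozen.r").items():
        print(f"{name}: {ratio:.3f}")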
Attachment 116904
Once I had those curves mapped out, I made ~8GB files of each of the various settings with Anvil's app (0-fill, 8%, 25%, 46%, 67%, and 101%) and made curves for each of them.
All put together, they look like this:
Attachment 116905
The green zone is where the potential candidates should show up. Only one candidate was in that range, however: 67%. Unfortunately, it fell out pretty aggressively with stronger compression algorithms. So I turned off the "Allow Deduplication" setting and generated another 8GB file and compression curve and it was a little better.
While dedicated hardware can be orders of magnitude more efficient than a CPU at an intensive task, I do doubt the SF-1200 controller's ability to out-compress and out-dedup even low-resource LZMA/RAR (R-Fastest and 7-Fastest), so the left-most part of the green zone is a stronger green, as I feel that's the most important section of the curve. Unfortunately, I don't have the ability to get more granular compression curves at the low end (left side) of the curve, so I'll have to make do with overall compression curves with just an emphasis on the low end.
Of all the data I have available it does look like 67% compression setting with "allow deduplication" unchecked seems to be the best fit for use as a 'real' data setting for when I start testing SF-1200. Hopefully anybody else who plans to test a controller with compression and deduplication will find this useful as well :)
I'll PM you the excel file. If nothing else it has the raw data. It will be nice to join the peanut gallery. :D I've been spending way too much time on that V2.
If you run at 67% it will be interesting to see if you get throttled at some stage. I suspect you will.
EDIT:
How will you record writes? 231 & 241?
@Vapor
Superb job on collecting compression data and yes, it's based on 7Zip Fast compression ratio. (could be Fastest, will check)
Looking at my tests on the SF2 controller it couldn't keep up with the ratio that 7Zip Fast(est) produces, not sure how the SF1 handles vs the SF2.
For reference, I 7Zipped one of my VM's earlier today (Windows Server 2008 R2, SQL-Server, + some data) and it ended up being ~50% of the original size using 7Zip Fastest.
Still, it took 40 minutes to produce that file :) using a W3520@4GHz on an Adaptec 3805 hosting a 3R5 volume. There is no way the SF controller is able to achieve that sort of compression on the fly, as 40GB is written at a rate of ~100MB/s on a 60GB SF1 drive (based on steady-state).
I'll do some more tests when I get a few more of the items off my to-do list.
I have filled up my M4 with OS, apps and some music. A total of 42220109824 bytes.
Attachment 116908
Last thing before we start tomorrow is the configuration of Anvil's app.
17.586 TiB written, 54 hours, sa177: 72/72/1392
I included screenshots again today, but on subsequent updates I will just post the numbers above. I'll also post smart attribute 178 if it ever changes.
2 days:
Attachment 116910
Attachment 116911
Nice job, and beautiful graphs!
There are two more data points I'd be interested in seeing: the compression your Sandforce drive achieves on your C: and D: not-compressed archive files.
I guess you can measure it by just observing the SMART values on the drive, then copy one file to the drive, then look at the SMART values again to find the compression (assuming that attribute for actual flash writes is accurate). Maybe you have to delete the file and re-copy it several times to get an accurate measurement?
I have zero doubt hardware designed for compression/dedup could do twice (at least) what our CPUs do with just a 1W power envelope....but that doesn't mean the SF1 and SF2 controllers can do it. It's a safe bet they can't and their compression levels are weaker than the weakest RAR/7zip setting--too bad there's no way of running their compression levels on our CPUs to see what they can do with more precision than the 64GB (or 1GB SF2) resolution the SMART values give.
Almost done with the charts of all the drives so far (minus the V2-40GB...not sure whether or not to include that as testing essentially errored-out). Including a new chart with normalized writes vs. wear, which is kind of necessary considering drives of different sizes are getting entered into testing; writes will be normalized to the amount of NAND on the drive, not the advertised size.
Working on bar charts with writes from 100-to-0 wear as well as total writes done so far. 100-to-0 wear will be extrapolated until MWI = 0 and then frozen...so when MWI > 0, total writes will be less than 100-to-0 but after MWI hits 0, total writes will be greater than 100-to-0. Would "MWI Exhaustion" be a better name for the 100-to-0 bar?
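For anyone curious, the normalization and the 100-to-0 extrapolation are nothing fancy; here's a minimal sketch of the math. The drive names and figures below are placeholders loosely based on numbers posted earlier in the thread, not the actual chart data.
Code:
# Normalized writes (writes per amount of NAND) and a linear MWI-exhaustion
# extrapolation. All figures are placeholders, not the real chart data.
drives = {
    # name: (host_writes_TiB, current_MWI, NAND_GiB)
    "40GB drive": (119.5, 34, 40),
    "64GB drive": (17.6, 72, 64),
}

for name, (writes, mwi, nand_gib) in drives.items():
    normalized = writes / (nand_gib / 1024)          # TiB written per TiB of NAND
    exhaustion = writes * 100 / (100 - mwi)          # linear extrapolation to MWI 0
    print(f"{name}: {normalized:.0f}x its NAND, "
          f"projected 100-to-0 writes ~ {exhaustion:.0f} TiB")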
Whoa, that is still running fast. It took One_Hertz 9 days to write that much. :p: (longer still for Anvil)
Not looking like it :eh:
Attachment 116912
This isn't even the normalized writes chart where the Samsung (and C300 and M4 and later SF-1200s) will be 'corrected' for being larger.
I was going to add that it might finish quicker as well :)
The Kingston SSDNow V+100 would be a good one to test. (~200MB/s sequential writes)
Here is part of a summary from Anandtech
"The second issue is the overly aggressive garbage collection. Sequential performance on the V+100 just doesn't change regardless of how much fragmentation you throw at the drive. The drive is quick to clean and keeps performance high as long as it has the free space to do so. This is great for delivering consistent performance, however it doesn't come for free. I am curious to see how the aggressive garbage collection impacts drive lifespan."
Also, if you measure the write speed of your C: and D: drive files when copied to the SF SSD, and also measure the write speed of a completely randomized data file (for example, encrypt your C: or D: drive file), and take the ratio of the write speed of the random file to the write speed of the C: or D: drive file, that should give you an independent estimate of the compression ratio achieved by the SF controller. It would be interesting to compare the compression ratio estimated with that method from the compression ratio computed from the SMART values of the drive. Just to be sure SF is not pulling a fast one!
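Here's a rough sketch of how that write-speed-ratio test could be scripted. The target paths and sizes are placeholders, the random data is generated up front so it doesn't skew the timing, and I wouldn't treat the result as anything more than a ballpark figure.
Code:
# Estimate SF compression by comparing the write speed of incompressible
# (random) data with the write speed of real data. Paths/sizes are placeholders.
import os
import time

CHUNK = 64 * 2**20        # 64 MiB per write
N = 16                    # ~1 GiB per run, small enough to pre-generate in RAM

def timed_write(dst, chunks):
    """Write the chunks to dst, fsync, return MiB/s, then delete the file."""
    total = 0
    t0 = time.time()
    with open(dst, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            total += len(chunk)
        f.flush()
        os.fsync(f.fileno())
    speed = total / (time.time() - t0) / 2**20
    os.remove(dst)
    return speed

def file_chunks(path, n):
    """Yield up to n CHUNK-sized pieces of a real-data file (keep the source on a fast drive)."""
    with open(path, "rb") as f:
        for _ in range(n):
            data = f.read(CHUNK)
            if not data:
                break
            yield data

random_chunks = [os.urandom(CHUNK) for _ in range(N)]        # generated before timing
random_speed = timed_write("E:/rand.bin", random_chunks)
real_speed = timed_write("E:/real.bin", file_chunks("C_drive_frozen.r", N))

print(f"random: {random_speed:.0f} MiB/s, real data: {real_speed:.0f} MiB/s")
print(f"estimated compression ratio ~ {real_speed / random_speed:.2f}x")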
I'd like to know how the C:, D: and 67% No Dedup fare with the SF compression as well....but with just 64GB resolution from SMART, I'd probably have to write 2-4TB of each of them (without writing anything else to the drive) to get any meaningful numbers. At just 44.9GB, 23.8GB, and 8.14GB respectively, that's a ton of repetition :eh:
If there's a way to do this without copying, pasting, and deleting 100s of times, I'm all ears. :)
I've implemented the option to make a pause after each loop, configurable from no pause up to 30 seconds.
Should I set the default option at 15 seconds just to give the controller some time to breathe (having deleted all the files created during the loop) or should I set it to 0 as in no pause.
(the pause would give the SF controller some time to recover)
All in all, what is 10-15 seconds per loop? Well, it's up to you :) We'd have to agree on this; I don't think there are any implications to introducing this option.
Hmmm, I thought Ao1's drive was reporting in GB (or GiB?) written. If yours only reports in 64GB, then I see the problem.
You could try the write speed ratio I mentioned, just take the ratio of write speed for a totally random file to the write speed of one of your real-data files. This would not be the most trustworthy estimate, but it would be interesting.
For the graphs, we should be careful to get the bytes written correct. Anvil's app actually reports TiB written, even though it is incorrectly labeled as "TB". I'm not sure what the correct units are for the numbers One-Hertz and Anvil have been posting in their updates. In my updates, I am reporting TiB written, taken from Anvil's app.
As One_Hertz said, the SMART data is apparently in TB. And outside of Windows, I have seen most programs use the units correctly. Certainly most linux programs don't have the Windows bug of displaying incorrect units. Many of the linux command line utilities allow a choice of which units to display.
GParted displays in GiB and TiB. Palimpsest (linux disk management util) displays in TB and bytes, so you can see it is using the correct unit labels. GSmartControl displays in TB, TiB, and Bytes.
The difference between TB and TiB is significant and worth paying attention to, IMO. It is about a 10% difference, so I don't want to get it wrong...
Okay, here's what I have for charts...I'll probably update them once a day. With permission from Anvil, I'd like to be able to post them in the OP as well. I would like to include one more subset of charts, but I'd need to know everyone's typical write speeds.
Raw data graphs
Writes vs. Wear:
Attachment 116919
MWI Exhaustion:
Attachment 116921
Host Writes So Far:
Attachment 116922
(bars with a border = testing stopped/completed)
Normalized data graphs
Writes vs. Wear:
Attachment 116925
MWI Exhaustion:
Attachment 116924
I'd like to note that the Vertex 2 in these charts will probably go away when the 60GB version(s) gets going now that we know SMART 233.
Didn't know SMART was reporting in TB; I figured it was playing into the Windows scheme (which I don't mind...TB as 1000*GB has no use, IMO). And yeah, the difference is pretty big in the TB/TiB range...this is definitely something we need clarification on. I've been under the impression that all utilities report TiB, regardless of what they call it.
Easy enough to fix as long as I know what, specifically, is broken.
EDIT: I have all the charts in my spreadsheet fixed to TiB assuming that SMART reads out TB in all drives so far (so every drive is adjusted except the Samsung 470). Is it a correct assumption that SMART = TB universally? I don't want to keep posting updated charts that are wrong :p:
Hard to say, universally. But anyone who has a drive running Anvil's app that has host writes as a SMART attribute (which Samsung 470 does NOT), can easily check what the SMART attribute is reporting, by comparing GiB written (labeled "GB written") in Anvil's app with whatever the SMART attribute is reporting at two different times, and checking to see if the difference in each number tracks as expected.
On the M4 the 0xAD value must be multiplied by 64GB (got this from Anvil). The easiest way to keep track is to enable "Keep running totals" in Anvil's app.
Attachment 116926
Running Anvil's app on my V2 50GB 2R0 array now....will know how a V2 scales within a couple of hours (even with the 64GB resolution).
And 0-fill with compression is pretty nifty in a case like this, hah...abnormally high speeds with no real wear on the NAND itself.
You have my permission :)
Great charts!
On the TB vs TiB matter, I've been reporting what CDI reports, and it looks like it's TB and not TiB.
I see the issue with the drives that aren't reporting host writes, so what do we do?
It's an easy fix, all data is written in bytes so it's just a conversion.
I'll probably end up making it an option, displaying TB or TiB, so which one should be the default one?
For a Vertex 2, I'm pulling ~63.2GiB of writes (per Anvil's app) per 64GB jump in SMART. I was expecting 64GiB or 59.6GiB....not something between. Will run it more.
EDIT: Shifting more to ~63.4GiB per 64GB now. Putting it out of range of even the non-existent 1000*MiB = GB (62.5GiB = 64 of those). Must be some 1% transfer overhead or something getting counted? But overall, it looks like 64GB SMART = 64GiB with Vertex 2. Now to test my Intel X-25M 80GB G1.
I vote for TB for default (I'm assuming you mean the computation, not just the label display).
There is a good reason why all scientific and engineering disciplines (except computer science) use base-10 unit prefixes. They are easy to work with in a base-10 number system! If I need to add up 111TB + 222GB + 333MB + 444KB, it is easy: 111.222333444 TB. But try 111TiB + 222GiB + 333MiB + 444KiB. I need a calculator and a lot of keystrokes to get 111.217114862 TiB.
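A quick snippet to double-check those sums, if anyone wants the difference spelled out:
Code:
# Decimal vs binary prefixes: the two sums from the post above.
KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12
KiB, MiB, GiB, TiB = 2**10, 2**20, 2**30, 2**40

decimal = 111 * TB + 222 * GB + 333 * MB + 444 * KB
binary = 111 * TiB + 222 * GiB + 333 * MiB + 444 * KiB

print(decimal / TB)     # 111.222333444
print(binary / TiB)     # 111.21711486...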
If you change it, I should still be able to enter an updated number, right? I can just stop the app, convert from GiB to GB, enter the new number, and restart the test?
Was going to do a multi-TiB test, but found it easy enough to stop/start the Anvil app and refresh SMART manually every ~.67GB (2 seconds) when I expected it to turn over. First differential was 63.3GiB, second was 63.2GiB, third and fourth were 63.4GiB per 64GB jump.
I vote for MiB, GiB, and TiB calculation because that is what's used in Windows and seemingly everything except product labels. And MB/GB/TB can be interpreted as either 1000^x or 1024^x (hence all the confusion and puzzle solving in the past few posts) whereas MiB/GiB/TiB have one interpretation. Yes, 1000^x MB/GB/TB is easier to add, but I'm not sure how often that issue comes up or will come up with this testing. :shrug:
EDIT: Happened to get one of my drive's SMART 233s value to be within 2GiB of turnover (the two drives had identical values and are only identical for a 2GiB window)....going to run brief compression/write amplification tests now. May not be pertinent since my daily driver V2s are on an old firmware and in RAID, but who knows :)
Nothing too monumental, but on my setup (2R0 Vertex 2 50GB with FW1.10, 32KB stripe), Anvil's "67% Compression with No Dedup" setting has a write amplification of ~1.185x: a 192GB increase in SMART 233 took only 162GiB +/- 0.7GiB of host writes. Will retest on a single drive with newer firmware later unless somebody beats me to it.
I have no way to write, in bulk, my C: and D: stores to my array, so that testing will have to wait as well.
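For reference, the WA figure is just the ratio of the SMART 233 delta to the host writes reported by Anvil's app, treating each SMART unit as roughly a GiB (an assumption based on the earlier measurements):
Code:
# Write amplification from the SF SMART 233 delta vs host writes.
# Treats one SMART 233 unit as ~1 GiB, which is an assumption.
def write_amplification(smart233_delta_units, host_writes_gib):
    nand_writes_gib = smart233_delta_units * 1.0
    return nand_writes_gib / host_writes_gib

# Numbers from the run above: SMART 233 moved by 192 while Anvil's app
# reported ~162 GiB of host writes.
print(f"WA ~ {write_amplification(192, 162):.3f}x")   # ~1.185x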
So that is a total of A = 271.9788 GB written by Anvil's app in four trials, in which (I think you mean) the SMART attribute increased by 256.
271.9788 / 256 = 1.0624172
(1024/1000)^1 = 1.024000
(1024/1000)^2 = 1.048576
(1024/1000)^3 = 1.073742
(1024/1000)^4 = 1.099512
So the units of the SMART attribute are hard to explain. They are closest to GiB, but still about 1% off from GiB.
I guess there is a bug in either the Sandforce firmware or Anvil's app.
121.33TB Host writes
MWI 33
--
I've done some checking and I think we'll have to leave things as is. I'll change the labels to MiB, GiB, TiB and put a "Hint" on TiB, so that when the mouse hovers over TiB written it will display TB written in the hint.
I've been through the source code and have adjusted for one event that wasn't recorded, the initial creation of the file used for random writes.
So, the GiB counter did not count the bytes written for when that file was created, essentially that means every time the app is started.
Having said that, this counter was never meant to be the only counter but we'll make it work for that as well. (as close as possible)
There is another source of writes and it is the internal database used for recording the loops, for every loop the stats are recorded so there is a 4KB blocksize update/write + the update of the Index. I'll check if I can get to those writes and add them to the running totals. (they don't amount to much but they are definitely producing writes)
Vapor, you can check the number of bytes written by the app by using Task Manager -> Processes.
From the View -> Select Columns... menu, enable I/O Read Bytes and I/O Write Bytes.
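If anyone prefers reading those counters programmatically instead of eyeballing Task Manager, something like this works. It assumes the third-party psutil package, and the process name fragment is just a placeholder, so adjust it to whatever the benchmark executable is called.
Code:
# Sum the same I/O Write Bytes counter Task Manager shows, for all processes
# whose name contains a given fragment. Requires the psutil package.
import psutil

def io_write_bytes(name_fragment):
    total = 0
    for proc in psutil.process_iter(["name"]):
        try:
            if name_fragment.lower() in (proc.info["name"] or "").lower():
                total += proc.io_counters().write_bytes
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass
    return total

print(f"{io_write_bytes('anvil') / 2**30:.2f} GiB written by matching processes")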
Yeah, A = 272GB or 253.3GiB for an increase of 256 in the SMART attribute. Each 64 increase was very consistently 63.2-63.4GiB too. Until more data says otherwise, I'm going to take SMART attribute readings for SF-1200 to equal GiB.
Just ran 135.95GiB of writes to my X25-M 80GB G1 and it showed up as 136.72GB in SMART (and damn my G1 is slow and stuttery with this...35MB/s and a stall once every 5-10 seconds). It would be good if someone could double-check this with more writes--my G1 just doesn't want to take part in the party.
Started it back up on my Vertex 2s with Task Manager running, will let it run for awhile and see what differences there are, if any.
354.69GiB written and reported by Anvil app vs. 354.6975GiB reported by Task Manager. Looks like any discrepancy is at the disk level or not there at all and just showing up due to the way I was measuring.
Going forward, it looks like A) it's totally fine to use Anvil's reported writes and B) SMART values are most likely in GiB.
I agree with A. But not B. From your data, it seems like the Sandforce SMART attribute has a bug. The attribute reports 256 of SOMETHING when Anvil's app reports 253.3GiB = 271.9788GB. The SMART attribute may be INTENDING to be in GiB, but what it is actually reporting is a unit that is about 98.9% of a GiB (about 1,062,000,000 Bytes). As you said, it is consistent -- consistently short of a GiB, meaning the firmware is recording 1% more writes than actually occurred if it is intending to be GiB. Alternatively, for all we know, it could be INTENDING to be in GB, but somehow it is missing some writes -- about 6%.
Any update on the C300 vs M4 debate?
We will start as soon as Vapor gets his C300 in the mail today.
As far as Sandforce on-the-fly compression goes I think it would be very interesting to compare that to a different brand of SSD using NTFS compression. Then you'd basically have on-the-fly hardware compression vs. on-the-fly software compression. Sandforce is basically just using Stacker on a chip. It would be nice to estimate what if any performance hit you would take by just running your Crucial or Intel drive compressed. Even if it does reduce performance slightly it might be worth it if it increases the write endurance of the drive to match Sandforce for compressible data. It also might be interesting to try to use NTFS file compression with a Sandforce drive. I wonder what would happen. Would it reduce the size of the files even more? Make them bigger?
I remember back when Stacker was popular. Lots of people ran with fully compressed drives in those days. It would effectively double the size of the drive. In the SSD new world order with 60 GB and 120 GB drives becoming common it would seem to have found a place again. Strangely, I can't find a single third party drive compression program similar to the old Stacker. Is the built in NTFS compression really so good that it can't be improved? I vaguely recall some app that used zip files to implement an automatic compression scheme, but I don't remember much about it. I'm also curious about how such a scheme would work with a large ramdisk.
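If someone does try the NTFS angle, a quick way to see what NTFS compression actually saves on a given file (once it has been marked compressed) is to compare its logical size with its size on disk via the Win32 GetCompressedFileSizeW call. This is only a size check, not a performance test, and the path below is a placeholder.
Code:
# Compare logical size vs on-disk size of a file on an NTFS volume (Windows only).
import ctypes
import os

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong

def ntfs_sizes(path):
    """Return (logical size, size on disk) in bytes."""
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    return os.path.getsize(path), (high.value << 32) + low

logical, on_disk = ntfs_sizes(r"D:\archive\documents.r")   # placeholder path
print(f"logical: {logical / 2**20:.1f} MiB, on disk: {on_disk / 2**20:.1f} MiB, "
      f"ratio: {on_disk / logical:.2f}")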
What are you using to read the SMART data?
I noticed that the RAW smart value from all Intel SSDs is in units of 32 MiB. But CrystalDiskInfo displays a field at the top called "Host Writes _____ GB", where it converts the 32 MiB unit field to GiB, but labels it GB.
It is really sad, it seems that almost all Windows programs incorrectly label GiB as GB, but almost all linux programs correctly label the units (and most give you a choice of GiB or GB). Incorrectly labeled units cause a lot of confusion. Besides the problems we are having monitoring TB written to our SSDs, the Windows trend of labeling the units incorrectly has apparently convinced countless newbies that formatting their drive somehow decreases the capacity by 10% or whatever. Arghhh!
Attachment 116974
Attachment 116975
Wow One_Hertz, you're getting to the end (possibly). This is where the good stuff starts... zero percent left :)
At 14:00 GMT+1 (summer time) I started up the endurance test of my M4.
It's been running for 90 min and is doing well so far. If the speed stays at this level I'll be writing about 200 TiB in July.
Attachment 116980Attachment 116981
How often do you want me to post updates?
B.A.T, what is the Crucial equiv of MWI? AD, 05, or what? Know it when we see it move? :p:
I think an update of like once a day should be good, maybe a little more often as 8TiB/day is very fast.
Should be running on the C300 within a few hours now :)
I wonder who will get 0 MWI first, johnw or One_Hertz?
In that case, you are actually reporting TiB, I think. I don't have an Intel SSD that has written more than 999GiB, so I cannot tell for certain what it reports at that level, but my Intel toolbox is definitely reporting in GiB, even though it labels it incorrectly as "GB".
You can double check by looking at the raw value for attribute 225 (0xE1). It is in units of 33,554,432 Bytes = 65,536 sectors = 65536 x 512 = 32 MiB .
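In other words it's just a multiply; with a made-up raw value it looks like this:
Code:
# Intel attribute 225 (0xE1): raw value is in units of 32 MiB.
raw_e1 = 13421                                   # example raw value, not real data
host_writes_bytes = raw_e1 * 32 * 2**20

print(f"{host_writes_bytes / 2**30:.1f} GiB")    # what Windows tools label "GB"
print(f"{host_writes_bytes / 10**9:.1f} GB")     # actual decimal gigabytes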
Intel SMART attributes:
Attachment 116984
Attachment 116985
26.387 TiB , 79 hours, sa177: 56/56/2174
If that 2174 is the average block erase count, and 56 is a number from 100 to 1 that normalizes the block erase count from none (100) to nominal max (1), then Samsung would appear to be specifying about 5000 block erases for their toggle flash (4960 at normalized 1, 5009 at normalized 0 ).
The problem is that if 2174 is the average number of block erases, then the WA is huge. Assuming 64GiB of flash on board, 2174 erases comes to 136TiB, which when compared to the 26.4 TiB written by Anvil's app, results in a ratio of 5.1.
If we take into account the 42GB of static data I have on the SSD, and assume Samsung does NOT do any static data wear leveling (I hope that is not true, but for the sake of argument just go with it), then there is only about 24.9 GiB free for Samsung to erase. 2174 * 24.9 GiB / 1024 = 52.8 TiB , which divided by 26.4 TiB results in a ratio of 2.0. Still seems high for WA.
I cannot make sense of the data. Anyone else have any ideas?
I suppose we will find out if Samsung has terrible write amplification in a few days or weeks -- if the Samsung 470 dies after much lower TiB written than the Intel or Crucial SSDs. Although if that happens, it will be ambiguous whether the early death was a result of high WA, or whether Samsung toggle flash has lower endurance than IMFT flash.
If only I could find a Samsung datasheet for the 470 SSD series SMART attributes. Intel and Micron both document their SMART attributes clearly in a datasheet. But if Samsung has such documentation, I cannot find it (their website is a total catastrophe -- even some of the consumer manual PDF files for the Samsung 470 SSD are broken)
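Here are the two scenarios written out as a quick calculation, in case anyone wants to check the arithmetic or plug in different assumptions (64 GiB of NAND on board, and sa177's raw value being an average block-erase count):
Code:
# Implied write amplification for the Samsung 470 under two assumptions.
host_writes_tib = 26.4
erases = 2174                                    # sa177 raw value

# Scenario 1: erases spread over all 64 GiB of NAND
print(f"WA (whole drive): {erases * 64 / 1024 / host_writes_tib:.1f}")          # ~5.1

# Scenario 2: no static-data wear leveling, only ~24.9 GiB being cycled
print(f"WA (dynamic area only): {erases * 24.9 / 1024 / host_writes_tib:.1f}")  # ~2.0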
Got home about an hour ago for the long weekend...UPS attempted delivery of the C300 about 2.5 hours earlier than normal and I missed it. :mad:
Called to arrange to pick it up this evening from their warehouse...was hoping it would be there before 6:45ish but won't be there until 8PM and I'm going to be at a July 4th fireworks show then.
I've had over a dozen UPS packages where delivery was attempted and I missed it, only for them to go to the front desk, get signed for there, and wait for me. This is the first time I've ever had a package not delivered when no special signature was required... grumble, grumble.
Oh well, C300 starts midday Tuesday and I will be home that day to receive it.
I had been posting once a day, but I may post twice a day now, since as Vapor says, things are moving fast for these SSDs.
I notice that Micron does not appear to have a host writes SMART attribute. So are you reporting the TiB written from Anvil's app?
I also notice that your SSD has a threshold of 10 (not 1 or 0) for attribute 0xAD = 173 "average erase count of all good blocks". That "10" seems an odd value for threshold. Your raw value is currently 7. If we assume your SSD has 64,023,257,088 Bytes worth of good erase blocks (from Micron datasheet), then that comes to 417GiB of writes, as compared to your screenshot, 419.79 GiB written. So it seems to be reasonably good agreement.
Maybe we can start graphing the raw value of attribute 173 for the m4 and 177 for the Samsung 470. For the Intels, does attribute 233 (0xE9) have a raw value that we can graph?
Micron (Crucial) SMART attributes:
Attachment 116986
Attachment 116987
I'll post screenshots of Anvil's app twice a day, along with 0xAD.
I had forgotten to turn on high power mode, so my computer rested for a couple of hours while I was out. It's back running again and it will stay that way.
BTW, does this look right? My 0xAD shows 11 = 64GiB x 11 = 704GiB, but Anvil's app shows 1 TiB.
Attachment 116994Attachment 116996
BAT
Change from HEX to DEC in CDI (Function -> Advanced -> Raw Values).
11h = 17
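In other words, CDI was showing the raw value in hex. Converted and multiplied out it lines up with what the app reports, assuming WA close to 1:
Code:
# 0xAD on the M4 is the average erase count; CDI was displaying it in hex.
raw_hex = "11"
erase_count = int(raw_hex, 16)                   # 17
nand_written_gib = erase_count * 64              # ~64 GiB of NAND per average erase
print(f"{nand_written_gib} GiB ~ {nand_written_gib / 1024:.2f} TiB")   # ~1.06 TiB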
--
123.49TB Host writes
MWI 32
No other changes.
Updated charts :)
Raw data graphs
Writes vs. Wear:
Attachment 117000
MWI Exhaustion:
Attachment 117002
Host Writes So Far:
Attachment 117001
(bars with a border = testing stopped/completed)
Normalized data graphs
Writes vs. Wear:
Attachment 117004
MWI Exhaustion:
Attachment 117003
In defence of the Samsung, at the speeds it is writing it probably doesn't have any time to manage WA. Also, faster write speeds leave less time for the NAND to "anneal". I'd guess this will become evident when the drive gets to MWI 1 (0 does not exist, at least on Intel drives).
As PE specs are based on minimum levels (not average) it should be possible to write well past MWI 1. I somehow doubt the Samsung will be able to, though.
It would be good to see a representation of the write speeds on the charts to balance things out a bit. One_Hertz, for example, took 14 days to write data that the Samsung has written in 3.3 days. One_Hertz's MWI dropped from 100 to 86 over ~26TB of writes.
Also it would be good to see the relocated sector counts. :p:
Agreed, this is why I asked for typical write speeds :p: If I had that, I'd make a chart of wear vs. write-days.
Seems just the Intels and Crucials have reallocated sector counts? Or does the Samsung 470 have it as well (SMART value 178 maybe)?
Here's the chart with the reallocated sector counts from the Intels added:
Attachment 117012
Vapor, for the V2 the write speeds varied depending on the loop cycle. It got faster as the cycle extended. I would say that 45MB/s was a reasonable average.
John, I appreciate the Samsung is a bit of a nightmare to monitor. :)
The other thing that will be interesting is how, or if, that drive manages to wear evenly by rotating static data at the speeds it is writing.
2241 GiB written, Wear Leveling Count and Percentage of the rated lifetime used has gone from 100 to 99.
Attachment 117015
So eager to see how the C300 performs. It will surely beat the M4, but will it beat the Intel ones?
Only time will tell us that. It'll be an exciting competition :)
Maybe data per day rather than MB/s would give a better, more even picture. :shrug:
EDIT - When I say avg, that is based on what it would typically be when I looked at it. It was sometimes slower, but never much faster. (Not with compressible data anyway)
Anvil, an avg write speed counter would be quite handy ;)
Indeed, we should also be able to see the speeds. A drive lasting a decade at 10 MB/s is useless to most people.
Will have a look at the avg speed.
My "X25-V" starts off at ~40MB/s and at the end of the loop its in the low 30s.
I do run the toolbox about every 24 hours, if I don't, it runs a little bit slower.
Anvil and One Hertz
Per the Kingston and Intel.
Wondering if you have checked read/write speeds lately? If so maybe post them.
Just wondering if they have maintained or dropped from the heavy usage.
I haven't seen them posted lately.
Thanks
Mike
My speeds are the same as they were at the start... It has not slowed down.
Thanks One Hertz.
Based on write-days estimation, here's some stats:
X25-V: ~20 days from now until MWI = 0, based on 33.5MB/s average writes. Projected MWI Exhaustion in ~65 days (from a new state).
320 40GB: ~7-8 days from now until MWI = 0, based on 42MB/s average writes. Projected MWI Exhaustion in ~55 days (from a new state).
V2 40GB: projected MWI Exhaustion in ~55 days (from a new state) if LTT were disabled, based on 45MB/s average writes.
Samsung 470: projected MWI Exhaustion in ~6 days (from a new state), based on 140MB/s average writes.
Will post up charts when I get better/confirmed write speeds...all but the V2-40GB are just my guesses based on screenshots in this thread. Of course, it is also too early to tell if the speeds seen so far are sustainable until the end of a drive's life (and we don't even know if MWI = 0 means imminent death).
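For anyone who wants to play with the projection, it's just the extrapolated total writes divided by the average write rate. The figures below are rough guesses, same as above, not measurements.
Code:
# Days from new to MWI = 0, given projected 100-to-0 writes and average speed.
# All per-drive figures are rough guesses, not measured values.
SECONDS_PER_DAY = 86400

def days_to_exhaustion(total_writes_tib, avg_mb_s):
    return total_writes_tib * 2**40 / (avg_mb_s * 10**6) / SECONDS_PER_DAY

for name, writes_tib, speed in [("X25-V 40GB", 180, 33.5),
                                ("320 40GB", 190, 42.0),
                                ("Samsung 470 64GB", 62, 140.0)]:
    print(f"{name}: ~{days_to_exhaustion(writes_tib, speed):.0f} days from new to MWI 0")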
30.895 TiB , 91 hours, sa177: 48/48/2616
You can get average write speed by taking the difference between this and the previous TiB and hours.
If I need to stop the test for more than a few minutes, I will note it in my next update, so the speed calculation can be adjusted. Otherwise, you can just assume the test was running continuously for all the hours since the last update.
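For example, from the last two Samsung 470 updates:
Code:
# Average speed between the 26.387 TiB / 79 hour and 30.895 TiB / 91 hour updates.
delta_tib = 30.895 - 26.387
delta_hours = 91 - 79
mib_per_s = delta_tib * 2**20 / (delta_hours * 3600)
print(f"~{mib_per_s:.0f} MiB/s average")         # ~109 MiB/s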
4.6361 TiB, 19 hours, Wear Leveling Count and Percentage of the rated lifetime used has gone from 99 to 98.
Attachment 117044
Nice results. Eager to see what happens after the MWI = 0 point!
125.2TB Host writes
MWI 31
Still at 6 reallocated sectors.
--
Some other stats
5.6229TB written over 416 loops in 48.9 hours, ~33.5MB/s on avg.
(includes 22,623.89 MB of random writes)
Well, that was the first 24h. At this point the M4 has written 6.0485 TiB. Wear Leveling Count and Percentage of the rated lifetime used has gone from 98 to 97.
Attachment 117053
Same goes for my M4. It starts around 40 MiB/s and tops out at around 92 MiB/s.
I'll have a look at it, it might have changed since I started doing manual trims.
The new version with the 5 second pause started running today as well.
Will be performing a manual MD5 test later today, so I'll monitor one loop for throughput at that time.