Hmmm, I've been trying to get an X25-E for a few years now; they just aren't available in SA. Is anyone willing to sell one at a good price? :D
It's been a while since the last update for the M4. We are spoiled little brats over here... :D
Actually, I bought the X25-E 32GB for $88. It only had 280GB of host writes on it when the post office dropped it off Thursday.
We call that a win where I'm from.
The "new" Agility60 comes in the cheaper plastic case, and in my testing with it, scales terribly in softraid with my older one (There were only a couple in stock, but the 30GB models around still). Writes scale great, but reads don't. My older Agility has only about 1.46TB of host writes, but has an average PE count of >1600 (which would be equivalent to around ~100,000GBs. There are a couple reasons for that, but mainly because its been abused in different un-Indilinx friendly conditions -- Win7 with trim is the only way to go.
On the other hand, my 120GB Vertex Turbo should arrive in the mail on Tuesday. I'm waiting to get it before buying another one, but after seeing the impressive results of the M225->Vertex Turbo, I couldn't pass up the opportunity to buy one new for $1/GB.
I bought a new X25-V cheap last weekend at a brick-and-mortar store, but my opinion is that endurance testing another one of those won't really say much. And I'm in the mood for destruction.
I do have a 510 120GB, and it would certainly put down high average numbers as well, but my plan is to use my laptop for endurance testing. I live in a tiny urban apartment and I can just close the lid, stick it under the couch, and pull it out to check its progress (and it's only SATA II, C2D, but with AHCI).
Besides the two 6Gbps controllers, I don't really see much out there for something different, but I want to do something. The Phison-controlled Torqx 2 might have a decent average speed under endurance testing as well (and it's certainly different), but I wanted some group consideration before jumping in. When do the new Samsung 6Gbps drives come out at retail? I think they're shipping in OEM laptops right now.
Maybe we can see if anyone is willing to donate an Intel X25-E, to find out if SLC is really that much better. Or maybe start a separate thread for SLC drives?
They both use Intel 34nm, but have different product numbers. RyderOCZ was kind enough to tell me which NAND they used:
New
JS29F32G08AAMDB
Old
JS29F64G08CAMDB
I'm not sure what those bolded numbers represent
The old one had higher correctable bit errors and much higher WA, but that was due in large part to me using it in suboptimal conditions. I recently tried updating to the 1.6 FW to try to reduce those. I'm not sure what causes those SMART-reported bit errors, but the count seems excessive. I have an Excel spreadsheet of SMART attributes that converts the raw read/write sector counts into GiB of host reads/writes, etc. I was really surprised once I started looking at it in detail -- I used that drive to clone drives, installed random Linux distros on it, and then used it in a laptop with Vista/no TRIM for quite some time.
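If anyone wants that same conversion without digging through Excel, here's a quick Python sketch. One big assumption: it treats the raw SMART value as a count of 512-byte sectors, which varies by vendor and attribute, so check your drive's documentation first.
Code:
GIB = 1024 ** 3

def smart_raw_to_gib(raw_count, unit_bytes=512):
    """Convert a raw SMART read/write counter into GiB of host traffic."""
    return raw_count * unit_bytes / GIB

# Example: 1,234,567,890 sectors of 512 bytes -> ~588.7 GiB written
print(f"{smart_raw_to_gib(1_234_567_890):.1f} GiB")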
If the throttling situation was sorted out, I'd pick up a 60GB SF2200 25nm drive in a heartbeat for endurance testing.
The Intel 311 20GB is a readily available, low-ish priced SLC drive if anyone is really intent on putting an SLC drive through its paces. It would probably be quick to die too (for an SLC device), considering how little NAND it has.
C300 Update
348.1TiB host writes, 1 MWI, 5872 raw wear, 2048/1 reallocations, 63.05MiB/sec, MD5 OK
SF-1200 nLTT Update
208.688TiB host writes, 151.406TiB NAND writes, 27 MWI, 2442.5 raw wear (equiv), wear range delta 3, 56.15MiB/sec, MD5 OK
The write speed is cut in half from the X25-Es, but that's still pretty high, especially when you consider avg write speed vs. capacity. It's basically the X25-V of the SLC world, with its controller population taking a big chunk out of performance. I just don't think it's possible to wear the drive out in any sort of reasonable time frame.
It would be killer to have a triplet of Larson Creeks in RAID 0... You'd have like 600MB/s reads and 300MB/s writes in 60GB of inexhaustible awesomeness. That's a commitment though - it would take decades to wear them out (probably).
I think you are correct.
From post #1362 it looks like erase cycles get slower and programming gets faster as the P/E cycles move towards the end game. Offsetting that, this chart from SF shows the controller overhead that is incurred as the P/E cycle count increases. I'd guess it would be the same for all SSDs that are good at reducing WA, so when the blocks with high wear are replaced, write speed should increase.
Attachment 119990
Nice idea with the RAID array. But maybe just one 20GB Larson Creek will be enough for this test.
'Hey everyone here (everyone but me) I think it would be great if someone here (anyone else but me) could test the endurance of anything I suggest.'
Sorry, hate to be the a$$hole here, but I had to get it off my chest. If I'm out of line, then I apologize and will take the punishment. :(
Anyways, thanks to all the "Testers" here and everyone else who has contributed and helped out tremendously. And Anvil for his awesome Utility. :up:
It takes a lot of time and effort to do all this and I say thanks! :clap:
261.48 hours
164.5770 TiB written
40.95 MB/s
MD5 ok
05: 0
B1: 80
E7: 28%
E9: 115328
EA/F1: 169344
F2: 256
Well, if you were (like me) a student in the UK with potentially 50,000 GBP of debt from university and a 20% chance of being unemployed after finishing your degree, you would understand why I cannot test anything myself :)
One link for you to consider : http://www.telegraph.co.uk/education...60k-debts.html
Yes, the actual PE cycles come out as a bell-curve distribution..
When rated at 5,000 PE cycles, that covers the vast majority of devices at a stated ECC level and data recoverability.
The NAND is also rated at 5,000 PE cycles for a given ECC level.
If you use more bit error correction than spec'd, you get 'higher' PE cycles.
If you use less bit error correction than spec'd, you get 'lower' PE cycles.
Note that the NAND doesn't just quit working; you are constantly increasing its raw bit error rate,
increasing the probability of the NAND returning data that is uncorrectable by the controller's ECC/data recovery algorithms.
That being said, it is possible for an SSD to lose data on any PE cycle prior to its NAND rating, only the probability of that occurring is very low.
Most of the same error rate probability stuff goes for HDDs, only their bit error rate progresses more linearly over time..
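To put a toy number on the "more ECC buys more PE cycles" point: if a codeword holds n bits and the controller corrects up to t bit errors, the chance of an uncorrectable codeword at a given raw bit error rate is a binomial tail. This is my own sketch with made-up numbers, just to show how fast the failure probability drops as t grows:
Code:
from math import comb

def p_uncorrectable(rber, n_bits, t):
    """P(more than t bit errors in an n-bit codeword) at raw bit error rate rber."""
    p_ok = sum(comb(n_bits, k) * rber**k * (1 - rber)**(n_bits - k)
               for k in range(t + 1))
    return 1 - p_ok

n = 4096 * 8                 # made-up codeword size: one 4KiB sector
for t in (8, 16, 24):        # stronger ECC -> orders of magnitude safer
    print(t, p_uncorrectable(1e-4, n, t))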
32Gbit/chip and 64Gbit/chip. Both use 32Gb dies.
The first one is a single die, 1 CE; the second one is dual-die, 2 CE.
Or you could leave the test to the SSD engineers... :wave:
If you have low-level access to the drive, you can write your own basic firmware that has no wear leveling and records certain ECC correction information.
Then you can just hit logical flash block 0 with 100,000 PE cycles (or up to the actual data failure point), then block 1, block 2, etc.
You can also pull raw bit error rate data versus PE cycles.
There are some engineering flash testers on the market today that do this for you, but they're not within an enthusiast's budget..
However, we actually end up doing this testing with fully built SSDs, so that we can test the flash in extreme conditions (industrial temp conditions -40C to 85C, thermal cycling, thermal shock, voltage margining, EM interference, radiation bombardment..)
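For anyone curious, the flow such a tester automates looks roughly like this. This is just my pseudocode reading of the description above: erase_block/program_page/read_page stand in for a hypothetical low-level NAND driver (no such Python API exists), and all the constants are assumptions.
Code:
import random
# from nand_dev import erase_block, program_page, read_page  # hypothetical driver

PAGE_SIZE = 4096        # assumption
PAGES_PER_BLOCK = 128   # assumption
ECC_LIMIT = 24          # assumption: correctable bits/page at the stated ECC level

def bit_errors(a, b):
    return sum((x ^ y).bit_count() for x, y in zip(a, b))

def torture_block(block, limit=100_000):
    """Cycle one physical block (no wear leveling) until ECC capacity is exceeded."""
    for cycle in range(1, limit + 1):
        erase_block(block)
        data = random.randbytes(PAGE_SIZE)
        worst = 0
        for page in range(PAGES_PER_BLOCK):
            program_page(block, page, data)
            worst = max(worst, bit_errors(read_page(block, page), data))
        print(block, cycle, worst)     # raw bit errors vs. P/E cycles
        if worst > ECC_LIMIT:          # the actual data failure point
            return cycle
    return limit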
I'm surely not suggesting that striping three of those would be an effective test (rather, just fun to play with), but with every passing day I get less and less concerned about effective MLC lifespan. Even Indilinx controllers, which started out with a shaky track record, have become more and more effective with every firmware release. That's why I think it would be years before you could put a dent in a Larson Creek -- unless everything we've been told about SLC is wrong (and it could be wrong the other way -- 2x as many PE cycles in practice).
No, you're right. It does get old really fast (like, months ago). Considering that an SSD for testing can be purchased new for $100, just about anyone who has internet access should be able to afford one by saving up their spare change for a few months, or skipping eating out or a movie once in a while.
404TiB. 4072 reallocated sectors. Looks like it's not dying any time soon :(
yeah, no kidding
I am also incredibly impressed with both the results and the willingness of the participants; it sure has taught me a lot about not needing to baby my SSDs nearly as much as I have been, lol. On a slightly OT note: does anyone know of any good methods of running drive maintenance on RAID 0 X25-Vs... or is pulling them out of the RAID config to run TRIM pretty much the only viable method to restore "factory fresh" type running conditions?
m4 update:
511.3357 TiB
1666 hours
Avg speed 91.01 MiB/s.
AD gone from 77 to 58.
P/E 8964.
MD5 OK.
Still no reallocated sectors
Attachment 120008Attachment 120009
Kingston V+100
I'm still trying to figure out why it drops out, so the test is still halted.
311.23TB Host writes
Reallocated sectors : 6
MD5 OK
22234.2GB written (Last 7 days)
I short stroked mine, but I use the array just for a couple of Steam games like Civ 5, Deus Ex, and New Vegas. The Intel controller is really robust and good at handling life without TRIM. If you have much free space on there at all, it shouldn't really get bad, but one option is to copy some very large files to the drives. The sequential file writes will level everything off. You could copy over some digital videos or perhaps .ISO files, then delete them (but you'd know if your performance was in the toilet, so save this for a rainy day). If you are running your OS on the drive, then it will get more beaten up without TRIM, and the X25-V is disadvantaged due to its size, but the Vs are pretty tough.
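If you'd rather script that fill-then-delete trick than shuffle video files around, something like this rough sketch does it (the target folder and headroom are assumptions; leave enough free space that the OS volume never hits zero):
Code:
import os, shutil

CHUNK = 64 * 1024 * 1024   # 64 MiB sequential writes
HEADROOM = 2 * 1024**3     # stop while ~2 GiB is still free
TARGET = "."               # assumption: a folder on the TRIM-less array

fillers = []
while shutil.disk_usage(TARGET).free > HEADROOM + CHUNK:
    path = os.path.join(TARGET, f"filler_{len(fillers):04d}.bin")
    with open(path, "wb") as f:
        f.write(os.urandom(CHUNK))    # large sequential, incompressible data
    fillers.append(path)

for path in fillers:                  # delete the fillers when done
    os.remove(path)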
I've found a winner.
After much consideration, I think I found an optimal mix of controller, size, and NAND type.
Hopefully the drive will ship out tomorrow and I can start abusing it by Friday.
Hmmm...
Not quite.
I'm not really interested in sequential read speeds(For testing purposes), but it is kinda cool that Crucial is bringing out new FW.
Also, there is already an M4 in the test. And a C300. I had an M4 64GB too, but I put it in my Mom's laptop the last time I flew home. It was genuinely, proper fast. It made me feel dumb for paying more to get a 510.
I'm not really sure I should give a hint... I don't want to jinx it... Once it ships out, I'll clue you in. I have faith in UPS and FedEx.
I would feel supremely idiotic if I named the drive and then the etailer screwed up (this happened to me recently).
I don't think it will be an issue, but just to be on the safe side I swear this oath:
If I don't get this particular drive by Monday (maybe Tuesday), I'm starting the test with my Intel 510 Monday night (maybe Tuesday??)
The 510 only has 593GB of host writes on it, but it only had 498GB before I started playing with AnvilPro's endurance testing yesterday. It's pretty fast, but since it's 120GB, its avg-write-speed-to-capacity ratio is only about the same as the M4 64GB's.
The drive isn't even on the manufacturer's web site, and it isn't listed (or is listed with "usually ships in 2-4 weeks") at the few etailers that carry the series -- and I think the etailer I placed the order with doesn't really have them either. So I put in the order anyway, and I'll just have to wait and see. That's why I didn't want to mention it. I'm actually incredibly excited about this particular drive. I probably shouldn't have even mentioned it, but I couldn't help myself.
I think I might be addicted.
I will curl up into the fetal position and cry like a little baby if they don't have this drive sitting in their warehouse in West Nowhere, just waiting to get on the truck.
well this is piquing my interest for sure... c'mon, out with it! Which drive?
Also... this quasi-failure with the Intel has just really impressed me. I'm much more impressed with that drive now than I would have been had there not been a failure!
I have my two drives limited to a 60GB array (so some reserve space on that front), and it's sitting with 10.5GB of free space (trimmed-down Win7 install... I need to reinstall one of these days to be able to install SP1, plus the Autodesk programs)
judging by a recent CDM 2.2 run, comparing to virtually brand new, I'm looking at...
Sequential: -110MB/s read, -8.37MB/s write
512K: -33.9MB/s read, -25.98MB/s write
4K: 0.12MB/s read, -9.97MB/s write
one drive has 9710 power-on hours, the other 9748, with 1.20TB and 1.32TB written respectively, and most of those hours (give or take maybe 100) have been in a RAID array in Win7
I'm sure there's also been a firmware revision since I installed my drives
I know the drives are pretty tough little cookies; I'm just now wishing I could get myself a third or fourth, lol
Would be good to know what the drive was.
286.11 hours
168.0275 TiB written
40.94 MB/s
MD5 ok
05: 0
B1: 83
E7: 25%
E9: 119616
EA/F1: 172864
F2: 256
m4 update:
518.3935 TiB
1688 hours
Avg speed 91.04 MiB/s.
AD gone from 58 to 54.
P/E 9085.
MD5 OK.
Still no reallocated sectors
Attachment 120054Attachment 120055
Kingston V+100
I'm still trying to figure out why it drops out, so the test is still halted.
Really impressive to see the Crucial M4 getting above 500TB. We'll see if it really beats the C300, but I highly doubt it because the controllers are pretty much the same and the C300's 34nm NAND is much better.
315.36TB Host writes
Reallocated sectors : 6
MD5 OK
32.93MiB/s on avg (49+ hours)
Speaking of the X25-E... here's my 32GB one with the most host writes. MWI of 90 still at 580TB written:
http://img3.fotos-hochladen.net/uplo...ym9c4t6qlx.jpg
M225->Vertex Turbo 64GB Update:
296.48 TiB (325.98 TB) total
882.9 hours
6475 Raw Wear
118.84 MB/s avg for the last 111.42 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 4
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829)
Attachment 120089
WOW, it really is impressive to see how much better SLC is compared to MLC! The Intel X25-E lacks TRIM but has a WA of 1.1 and 50nm SLC NAND. An M4 has TRIM, a WA of 1.1, and 25nm MLC NAND. SandForce often achieves an actual WA of 0.6, which is very nice, plus TRIM. Capacity also plays a role here: for example, a 64GB X25-E will last twice as long as a 32GB X25-E!
Imagine the almost ideal scenario for SSD endurance (which does not exist currently, but the technology is there): 90nm SLC NAND, 256GB capacity (tons of reserve space too), a WA of 0.6 with a compression controller like SandForce, and TRIM.
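The arithmetic behind these comparisons is easy to sketch. Idealized, of course, and treating the quoted WA figures and the commonly cited P/E ratings (100k for 50nm SLC, 5k for MLC) as rough inputs rather than gospel:
Code:
def tbw_tb(capacity_gb, pe_cycles, wa):
    """Idealized host writes (TB) before the NAND's P/E rating is exhausted."""
    return capacity_gb * pe_cycles / wa / 1000

print(tbw_tb(32, 100_000, 1.1))   # X25-E 32GB, 50nm SLC   -> ~2909 TB
print(tbw_tb(64, 100_000, 1.1))   # X25-E 64GB: exactly 2x -> ~5818 TB
print(tbw_tb(64, 5_000, 1.1))     # m4-class 64GB MLC      -> ~291 TB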
I can't.. bought it like this - just got in today... I wondered WTF the guy did with it as well.
That X25-E averaged ~31.7MiB/sec for ~7.5 months in real usage :eek: :lol:
m4 update:
526.6830 TiB
1714 hours
Avg speed 91.06 MiB/s.
AD gone from 54 to 49.
P/E 9227.
MD5 OK.
Still no reallocated sectors
Attachment 120099Attachment 120098
Kingston V+100
I'm still trying to figure out why it drops out, so the test is still halted.
wow, looks like a server drive; bet it was used in a caching environment. Cache disks can take a beating, but damn, that is a lot of resets for a server scenario... interesting, a good mystery!
Quote:
Speaking of the X25-E... here's my 32GB one with the most host writes. MWI of 90 still at 580TB written
Quote:
Just out of curiosity, can you detail a little (if allowed) what kind of load do you have on that SSD? I am asking because on normal desktop usage and light servers I saw no more than 5-10TB of data per year until now.
Quote:
I can't.. bought it like this - just got in today... I wondered WTF the guy did with it as well.
Quote:
That X25-E averaged ~31.7MiB/sec for ~7.5 months in real usage
Quote:
wow, looks like a server drive; bet it was used in a caching environment. Cache disks can take a beating, but damn, that is a lot of resets for a server scenario... interesting, a good mystery!
I'm following this thread with great interest, admiring the great job done by all participants, and learning a lot.
At the risk of being obvious, I'd like to mention that some of the Intel Toolbox SMART attributes seem not to be very reliable, since there are old complaints about them on Intel's SSD forum, like this one:
Quote:
Nov 16, 2010 4:34 PM
intel G1 ssd reports weird unsafe shutdown cont...
In my case, with one new G2 just after OS installation, the Toolbox was showing about 2 or 3 TB of host writes.
Keep up the good work.
Def. looks like your X25-E was used in an enterprise environment, probably as a ZFS cache disk or for database work.
Looks like he won't budge. Why so serious? :D
m4 update:
532.2283 TiB
1731 hours
Avg speed 91.09 MiB/s.
AD gone from 49 to 46.
P/E 9326.
MD5 OK.
Still no reallocated sectors
Attachment 120132Attachment 120133
Kingston V+100
Still haven't started the test again. I'll keep trying to find what's wrong.
Never mind, I'm using TeamViewer and will still update every day (after a tip from Anvil :up:)
Quote:
Next update will be 26 Sept. I'm going away on a job trip. If the drive fails while I'm away, I'll get a timestamp from ASU of when it happened.
No. If it writes "less than twice as fast" while being double the capacity, it should last LESS than twice as long... assuming it writes faster than the lower-capacity model. If both models write at the same speed, then double the capacity should mean about double the longevity (although free space and overprovisioning can make a difference).
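Putting toy numbers on that (the capacities, P/E rating, and speeds below are made up; endurance here is just capacity x P/E, ignoring WA):
Code:
def hours_to_exhaust(capacity_gb, pe_cycles, write_mib_s):
    """Wall-clock hours of nonstop writing to burn through the P/E rating."""
    return capacity_gb * 1024 * pe_cycles / write_mib_s / 3600

base = hours_to_exhaust(32, 5000, 100)   # hypothetical baseline drive
fast = hours_to_exhaust(64, 5000, 180)   # 2x capacity, <2x write speed
slow = hours_to_exhaust(64, 5000, 100)   # 2x capacity, same write speed
print(fast / base)  # ~1.11x the wall-clock life: less than twice as long
print(slow / base)  # 2.0x: double capacity at equal speed = double longevity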
C300 Update
367.1TiB host writes, 1 MWI, 6190 raw wear indicator, 2048/1 reallocations, 63.05MiB/sec, MD5 OK
Attachment 120137
SF-1200 nLTT Update
225.75TiB host writes, 164.688TiB NAND writes, 20 MWI, 2635 raw wear (equiv), wear range delta 3, 56.2MiB/sec, MD5 OK
Attachment 120138
I think the M4 will die soonish now.
That M4 isn't going anywhere for a long time.
Maybe upgrade the FW on the M4 to be as realistic as possible?
M225->Vertex Turbo 64GB Update:
310.11 TiB (340.97 TB) total
914.08 hours
6730 Raw Wear
112.70 MB/s avg for the last 16.50 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 4
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829)
Attachment 120154
I noticed he tried, but maybe try again. I think the controller is very dodgy if new attributes show up just from attempting a flash, without new FW actually being applied. And why can you not downgrade from 0009 to 0002? Seems very strange to me.
Sorry to leave you guys hanging - that was kind of mean of me, but I figured I'd get the shipping notice during my trip. I went to the Smoky Mountains for a few days and just got back, but I did get shipping confirmation today that my testing drive shipped.
It's a SandForce 2281-controlled Mushkin Chronos Deluxe, but in the super spiffy 60GB capacity (it just came out this week, I guess, because it wasn't even listed in the 60GB capacity at Mushkin). It's equipped with 32nm Toshiba synchronous Toggle NAND and should be pretty fast. While I was gone, it showed up on the Mushkin website, which also sells direct. I was afraid some of the sites were incorrect about the product listing or availability, because I wasn't aware that anyone was making these.
http://mushkin.com/Digital-Storage/S...CR60GB-DX.aspx
Mine shipped, but it won't be here until Tuesday. I was hoping I'd have it on my doorstep when I got back, but my 3G cellular modem doesn't work way up there.
I wouldn't have even known to look for it, but I stumbled upon a website called FutureStorage.co.uk, which sells and possibly rebrands/orders drives to their spec. I saw that they had distribution and manufacturing in North America, from a factory in Texas. I believe this factory makes the Mushkin drives as well, so I looked to see if Mushkin made a Toggle NAND SF drive in the 60GB capacity like FutureStorage does (which they do). I thought a Toshiba-equipped SandForce drive would stand up really well with the drop down to 60GB.
Here's hoping it doesn't have LTT.
I tried this, but no matter what trick I used, it just didn't want to update to 0009.
More bad news from me. AVG forced a restart of the rig today and now I can't reach it with TeamViewer, so we all need to wait for an update until the 26th.
It's just the worst kind of luck that the update came the day after I left for 10 days..... :shakes:
Because the rig was restarted, ASU is not running, so we will continue from my last results, I guess.
I've noticed that one of my Intel drives jumped from 200 power-on hours to 400... in about 3 hours. Luckily, the X25-E I got on eBay only had 288GB of host writes... not that it matters. 600TB is really only like 2% of what the drive could probably write if you look at Intel's spec vs. reality. I think my X25-Vs will last for 700,000GB of host writes, vs. whatever Intel rated them for. I really, really think Intel low-balls PE cycles for all of their drives. For instance, I read a short article about Intel's forthcoming HET-MLCs...
http://techreport.com/discussions.x/21644
This is laughable. The author (I respect this site, by the way) states that Intel claims a 320 series 300GB drive is capable of writing a massive... 30TB in its lifetime? And that with 30x the endurance, a 710 series 300GB drive should be capable of writing 1.1PB...
It's good for a laugh.
Think of what the 710 could really do... jesus, 30x (projected) the results we have seen with the 320 would be simply insane... not to mention that the 320 is still running.
Very thought-provoking that they could achieve this type of performance/endurance with no over-provisioning dedicated to spare area, only as XOR.
Quote:
Although Intel doesn't set aside any flash capacity as spare area to start, the 100GB drive does have more than 100GB worth of NAND chips onboard. The extra NAND capacity (Intel won't say exactly how much) is dedicated to a RAID-like redundancy scheme that calculates parity bits to protect against data loss due to unexpected flash failures.
Also, no mention of 25nm SLC anywhere, even this "late" in the 25nm game...
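The redundancy idea itself is the classic XOR-parity trick. Here's a toy illustration of the concept (definitely not Intel's actual implementation, and real stripes are far bigger than 8 bytes):
Code:
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

dies = [bytes([i]) * 8 for i in (1, 2, 3)]   # data striped across NAND dies
parity = reduce(xor, dies)                   # the hidden extra NAND holds parity

lost = dies[1]                               # one die fails unexpectedly...
rebuilt = reduce(xor, [dies[0], dies[2], parity])
assert rebuilt == lost                       # ...survivors + parity rebuild it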
Also at the same site, in a different article, there is a discussion of Larson Creek's successor, which utilizes 25nm SLC.
http://techreport.com/discussions.x/21652
Ask and ye shall receive (some random PowerPoint slides from IDC).
What I found funny about the HET-MLC article was actually the fact that Intel states the write span of a 300GB 320 as only 30TB -- only 100 times its capacity. That's like saying an X25-V can only write 4TB over its lifespan. Talk about under-promising and over-delivering.
I can't wait to tear into this Mushkin. I'm not sure what SF does with 32nm Toggle NAND as far as sacrificing one whole die... I thought they were doing something different for the 60GB capacities. Maybe RAISE isn't enabled for the 60GBs? If it works as well as Intel's system (and is enabled on the Chronos Deluxe 60), then it could last... a lot longer than this "premium Toshiba Toggle NAND's" PE cycles would indicate. Not to mention, if your workload is at all dedupe-friendly, you get sub-1 WA.
The Chronos Deluxe will be my first SandForce drive (all of my drives are Intel-, Indilinx-, or Micron-controlled), but honestly, I'm pretty impressed with the new Vertex Turbo that came while I was away. I played with it today and it's pretty damn fast for a two-year-old drive with 50nm Samsung NAND, but the SF2281 controller could be a whole other level, even compared to the Intel 510, in terms of both performance and longevity with 32nm flash. It's assembled in the USA like OWC's Mercury Pros. I've already set up my desktop for silent running and ran ASU on one of the Agilities for 8 hrs in preparation.
I can't wait.
wow! First mention of 25nm SLC I've seen yet :) Haven't had time to dig through the IDF slides yet, guess I should give that a whirl :)
Quote:
Ask and ye shall receive
That was the only one.
The next iteration of Larson will probably be around the same size with the same specs, because I don't see Intel making it cheaper than $120. I don't see them making it much more expensive either, considering it's supposed to be paired with SRT. They'll somehow manage to make it unattractive in relation to capacity and specs so as not to steal their higher-margin enterprise market share. If they could make it like the 32GB E's for $120, no one would buy anything else. I don't really believe in the HET stuff anyway, so at least a couple of people would think an inexpensive SLC drive with TRIM is a good option. So that's my rationale for not getting too excited about the next generation of 'Creeks. Hopefully they'll prove me wrong.
Those specs from Intel are most likely based on 4K random, full span, which is an entirely different scenario to what is being tested here.
@ Anvil: how about a version of the endurance test that runs 4K random, full span, just to give a comparison to what is being tested here? I’d be up for testing another SSD in that scenario.
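While we wait, here's roughly what that workload looks like as a script. Illustrative only -- the file name is an assumption, and a serious version would hit the raw device with direct I/O and a real queue depth via async I/O, which is what a proper test tool does:
Code:
import os, random

BLOCK = 4096
PATH = "span.bin"   # assumption: a file pre-allocated across the whole test span

nblocks = os.path.getsize(PATH) // BLOCK
payload = os.urandom(BLOCK)                  # incompressible 4 KiB payload

with open(PATH, "r+b", buffering=0) as f:
    for _ in range(1_000_000):               # 4 KiB writes, random offsets, full span
        f.seek(random.randrange(nblocks) * BLOCK)
        f.write(payload)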
Regarding the 710: spending $6.50/GB when the only notable benefit is endurance is going to be a tough call regardless of application. Something that lasts that long, at that price, when SSD technology is still evolving doesn't seem to make any sense at all. :confused:
That is what I was thinking all along. Companies are not stupid. They would not want to underestimate their products so badly. What we are doing here is just dumping a load of sequential data onto the SSD in a precise pattern. 4K random is totally different and much more stressful. As is secure erasing, which nobody thought would be good to test.
Just put one of these 25nm MLC drives like the Intel 320 into an enterprise server environment and see how long it really lasts. I bet it would be pretty close to the 30TB Intel specifies, and NOT the 300TB our tests have shown; otherwise, why would anyone buy SLC drives? Or think about cache drives: why would Intel use 34nm SLC in Larson Creek if MLC were as good as we have shown here in these tests? Does not make sense to me. Maybe I just don't get it, but I think Ao1 is thinking the same thing.
Sure, our test is good for the "average" user at home who stores static data and mostly transfers sequentially, but this cannot compare to enterprise usage with no static data, where all the drive capacity is used 24/7 for caching or similar 4K-random-intensive tasks. Hence Intel's rating difference from what we saw here.
Consider it done; I'll have a look at it today. If it's to be close to "enterprise" workloads, I'd say we need to agree on some QD.
I'll probably make QD configurable, but in general we'd need to agree on some value.
edit:
@bulanula
Keep in mind that the drives we are testing are not enterprise drives, although Intel hints that OP-ing their drives sort of makes them usable in "business scenarios".
324TB Host writes
Reallocated sectors : 7 (we finally have some movement here and it happened during the last 68 hours :))
MD5 OK
Power on hours is now at 2995 and it has been writing continuously for 2860 hours.
Will have a look at random writes shortly, should be some TBs.
edit:
1.07TiB of 4K random writes (1,125,478MiB)
I had expected more, but this drive is not a great performer at random writes. It would be interesting to know what the 320 has generated (One_Hertz, could you check what is logged? :)); I expect it could be 2x or more.
I'm just about to start the Endurance test using a Force 3 120GB.
I've been preparing a bit, so it's got ~6 hours of runtime so far, mostly endurance-related.
I'll do some final test on the rig where the other drive is running and if all is well I'll start in a few hours.
Attachment 120180
Attachment 120181 Attachment 120182
Below are the Intel specs for enterprise. For client use it is around 35TB (20GB per day × 5 years).
@Anvil: QD is tricky. As far as I can see, Intel don't mention the QD they use for their endurance specs.
Attachment 120183
Attachment 120184
Intel publishes lower numbers for the 320 series because they want to sell the 710 series to enterprises at a much higher price. Look how good the table above makes the 710 series look. Just put the number 500TB in the first table for the 120GB drive and nobody will touch the 710.
I wonder why nobody notices that after having seen nearly 1700 posts in this thread. And this thread just talks about MLC consumer grade drives!
@bulanula: There is no true 100% random 4KB write enterprise scenario. There are always sequential writes to ease the drive off. Enterprise filesystems (ZFS) as well as databases have tricks up their sleeves to convert some of those random 4KB writes into larger sequential writes. Note "some".
I've seen 32 in a few of their documents; I would need to dig them up though.
Quote:
@Anvil: QD is tricky. As far as I can see, Intel don't mention the QD they use for their endurance specs.
EDIT: that was with the X25-M, but I'm sure it is one and the same.
Enterprise applications are built to sustain the load while being kept on fast RAID HDD arrays, which can barely provide 10-20MiB/s of random read/write. Now, usually when HDDs are replaced with SSDs, the next bottleneck becomes the CPU, as data is fed very fast. Also, most enterprise applications are optimized for reading/writing big chunks of data of at least 0.5-1MB (on an HDD, due to its rotational latency, it takes around the same time to read/write sequential chunks with sizes between 4KiB and 1MiB). Making enterprise applications that are totally dependent on 4K random read/write is purely retarded. As an example, Google is using a file system for their clusters which is optimized for files with an average size of 10-60MiB. Their operations are sequential in nature, as this is the best way to use the hardware. Even database engines are highly optimized; MySQL can use 64KiB chunks for reads/writes.
Now, the worst possible scenario is indeed using an SSD as a cache, but keep in mind that the usage model for a cache is random writes + random reads, where depending on the scenario you might have a higher number of reads than writes, which will decrease the average writing speed significantly and allow a recovery period for the NAND cells. If a real enterprise usage test is desired, then the SSDs should be tested in such a way that each page that is written is also read at least once. Pure random writes over the entire space would be torture, not real-life usage.
Even as a cache device it won't see the 4KB random writes that IOMeter unleashes on the drive. A typical block size for ZFS or an Oracle database is much larger than 4KB. And then, caches are typically tuned to merge smaller writes into bigger ones because in-memory caches sit in front of them (you will see 128KB random writes instead of 4KB). And then, top that with the fact that if the caching algorithm is any good, your caches will be read more than written to.
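That merging behavior is simple to picture. Here's a minimal sketch of coalescing contiguous 4 KiB writes into larger I/Os (my own illustration, not any particular cache's or filesystem's actual algorithm):
Code:
def coalesce(offsets, block=4096):
    """Merge contiguous 4 KiB writes into larger I/Os, like a write-back cache."""
    merged = []
    for off in sorted(offsets):
        if merged and off == merged[-1][0] + merged[-1][1]:
            start, length = merged[-1]
            merged[-1] = (start, length + block)   # extend the previous run
        else:
            merged.append((off, block))
    return merged

# eight 4 KiB writes collapse into three larger I/Os
print(coalesce({i * 4096 for i in (0, 1, 2, 7, 8, 9, 10, 40)}))
# [(0, 12288), (28672, 16384), (163840, 4096)]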
The purpose of the 4K access is to emulate the worst-case scenario imaginable, not to represent any real-life patterns or even actual enterprise loadings. There are some workloads that are dependent upon 4K, however.
The reason SSD caching is tested with 4K is, again, that it is the hardest to handle. If the solution can handle 4K well, it can handle much easier workloads as well.
It won't be a real-life test, and 4KB is rather small by today's standards; 8KB is about the smallest there is for a typical database (Oracle, MS-SQL), and on top of this there are a *lot* of sequential writes.
(there are of course other workloads besides databases but they will most likely have a much larger block size)
I'll make some options for the write block size (4KB, 8KB, ...), as well as more options on size (including full span).
This type of test is highly unlikely in any environment though.
We already know what the Intel data sheets tell us.
100% 4KB writes, 100% span.
G2
80GB = 7.5TB
160GB = 15TB
G3
40GB = 5TB
80GB = 10TB
120GB = 15TB
160GB = 15TB
300GB = 30TB
600GB = 60TB
I agree :)
Some types of adjustable parameters would be great for this type of testing.
It would be interesting to see what the real numbers behind Intel's testing are, because previous documents show they measure up to 3 percent NAND loss... of course, the devices would be able to last much longer just by leveraging OP alone..
4K random full span is obviously not a realistic workload, but it does provide a way to compare directly with Intel's specs. It would be interesting to see how that pans out on a 320 40GB drive. Exactly how far out would the specified 5TB value be? x10? x20? x30?
What kind of sustained write speed could the 320 manage at QD 32? (With 4k full span random)
It will be very, very slow, actually. The write speed will slow down tremendously with full span.
The guys from Intel are real engineers and are specifying the worst conditions, not real-life conditions. Nothing wrong with that, but some real-world expectations should be published too, otherwise the "average Joe" will read the specs, see 15TB, and be scared by how low the numbers are. That "average Joe" might be the boss you asked for an SSD replacement for your desktop at work, and these specs will be the best reason to deny your request.
I suppose that if this Chronos Deluxe does have LTT, then I could switch to just 4K writes and pound it slowly. The only real desktop workload would be actually using the drive in a desktop for the next 12 to 20 years. And that sounds more like a prison sentence.
The Force 3 is doing great so far.
This first ss shows 2.87 hours of writes w/Security Essentials enabled on the folder where writes are being performed.
Attachment 120189
This one shows 2.87 hours of writes w/Security Essentials disabled on that same folder.
Attachment 120190
Attachment 120191
The drive is filled with W7 files (OS), one virtual machine, 22GB of MP3 files + some miscellaneous files (mostly incompressible), which leaves ~49GB of free space.
We'll see tomorrow if it can keep up the pace :)
I've been playing around some more with the 128GB Vertex Turbo I got in the mail early this week. A few days before that, I had also received a new 60GB Agility. Both were purchased and received within one week.
Obviously, OCZ isn't making any more of the original Vertex/Agility, so I figured both drives were leftover stock, stuck in a warehouse somewhere. I was reflecting on this today and had a crazy idea.
Remember, earlier this year OCZ switched from their metal shells to the cheaper plastic ones. That would mean these two drives should have been manufactured this year, probably in May.
What if OCZ is taking RMAs and returns, destructive-flashing them, then dropping the PCBs into new enclosures and selling them as new? Mind you, I did get pretty good deals on each; both were about $1 a GB. The metal enclosures' paint wore off easily, so it would be hard to recycle those. It's actually kind of disturbing, since they're, you know, sold as new. You'd never have any way of telling how long ago a drive's PCB was manufactured, and they could destructive-flash it and remove all the SMART attributes as well. The new plastic drive cases differ only by the sticker on top, so if you have a functioning returned drive, why not put it in a new shell and resell it?
Something like 10% of SSDs are returned in perfect working order, but lots of Indilinx drives locked up or got bricked -- both things that could be fixed with a destructive flash.
I just thought it was a little suspicious. After all, I don't even think Samsung uses 5xnm NAND anymore. They have new versions of the 470 that I believe use smaller-process flash and have slightly different model numbers.
I think it's a little fishy myself. Not saying that's what OCZ is doing, just that if they did, how would you know?
I have a Vertex that I received from an RMA situation. It 'panic locked' and could only be broken free of this by using a firmware flasher that isn't generally released unless you have a specific problem.
However, these panic-release flashers are released regularly to people with problems; they talk about them all the time in the Vertex area of their forums. They're only given out via PM or email with support.
So they sent me a few of these flashers, as they match up to certain configurations of the type of NAND on the Vertex series, as there have been a few versions over the years.
When none of the versions worked on my drive, they happened to ask me directly if the drive had come from an RMA. When I told them that it had in fact been an RMA return, they informed me there wasn't a flasher for that drive, and it had to be sent back.
Struck me as odd then, and still does now. I believe it was a refurb, and something akin to what you are mentioning is the situation. Otherwise, why would a drive received from an RMA return be the only drive they did not have a flasher for (or at least one they were willing to release)?
Maybe, but isn't this the purpose of this whole test? It is not really "real-life usage" because you are not giving the NAND time to recover, as would happen in real life (people do turn off their computers at some point or another). Maybe we should run some tests using just 4K writes across the full span of the drive?
I'm having some issues with the Kingston drive; it is being dropped by the MB, just like earlier this summer.
(There were no errors on the drive; I've been suspecting issues with the MB though.)
I'll move it to one of the Intel rigs and let the Corsair run on this rig, for a few hours that is.
The Corsair is still doing fine.
E6 100
E7 0
E9 5930
F1 7927
106.74 MiB/s on avg. (~15.5 hours)
So, it's closing on 8TiB host writes.
edit:
The Kingston V40 has had no issues since I moved it to one of the Intels. I'll have to find out if there is something going on with the workload on the AMD board; cooling should be OK, so I'm not sure what it could be.
(I will check the temps though.)