Thanks for the help, Anvil.
My estimate was based on erase count numbers, and I was only off by about 300GB.
Actual count: 50,769 GiB (49.58 TiB)
Had a Crucial C300 256GB SSD that just died a week ago. Purchased back in March of 2011. Gave me a good 11 months.
Bought a Crucial M4 256GB SSD to take its place. So far, so good.
\Project\ Triple Surround Fury
Case: Mountain Mods Ascension (modded)
CPU: i7 920 @ 4GHz + EK Supreme HF (plate #1)
GPU: GTX 670 3-Way SLI + XSPC Razor GTX670 water blocks
Mobo: ASUS Rampage III Extreme + EK FB R3E water block
RAM: 3x 2GB Mushkin Enhanced Ridgeback DDR3 @ 6-8-6-24 1T
SSD: Crucial M4 256GB, 0309 firmware
PSU: 2x Corsair HX1000s on separate circuits
LCD: 3x ASUS VW266H 26" Nvidia Surround @ 6030 x 1200
OS: Windows 7 64-bit Home Premium
Games: AoE II: HD, BF4, MKKE, MW2 via FourDeltaOne (Domination all day!)
New drive being endurance tested: Patriot Torqx 64GB
Controller: Phison PS3105-S5
NAND: Toshiba 32nm (probably rated for 5,000 cycles)
Cache: 128MB DDR
Impressions: Drive has great difficulty maintaining high performance under heavy load. Either I am seeing huge write amplification during the endurance test, or (and probably more likely), the drive simply isn't erasing blocks in a timely manner. This drive may need a lot of idle time to perform decently.
Benchmarks after initial wear-in (4th run of CrystalDiskMark or so):
Day 0 SMART values:
I am pretty sure attribute AA represents bad block counts. I will keep people updated when that changes.
I am also fairly sure AD represents wear levelling. I'm not too sure what the numbers mean ... but the 2nd and 3rd raw numbers of this attribute increment individually.
Drive hours: 126
GB written: 6897.82
Avg MB/s: 16.93
Bad blocks: 83
Wear cycle counters: 0/435/673 (100 normalized)
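As a quick sanity check on the numbers above, the average speed and total written imply how much of the 126 power-on hours the drive actually spent under test (a back-of-envelope sketch using the posted figures, with GB taken as decimal, 1 GB = 1000 MB):

```python
# Figures from the day-0 update above; GB assumed decimal (1 GB = 1000 MB).
gb_written = 6897.82
avg_mb_s = 16.93
drive_hours = 126

test_seconds = gb_written * 1000 / avg_mb_s
test_hours = test_seconds / 3600
print(f"implied test time: {test_hours:.1f} h of {drive_hours} power-on hours")
```

That works out to roughly 113 of the 126 hours actually writing, so the figures are self-consistent with a bit of idle time.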
@canthearu
A few missing details:
Size of static data
OS and platform
Date started
Over-provisioning?
I was actually pretty surprised by the performance of the Torqx. The 3016 Phison is probably pretty similar to the 3105 in the Torqx, but I'm 99.4% sure the firmware of the 3016 uses block-level mapping and not page-level mapping. If you look at the 512K results in CDM, the 3016, when fully populated, is only gonna do like 12MB/s. The 3105 is just much, much better.
And it somewhat makes sense -- the Phison was surely a CF/SD/etc controller initially, optimized for taking pictures at least 1MB in size.
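The block- vs page-level mapping tradeoff mentioned above comes down to mapping-table size versus small-write cost. A rough illustration (the geometry numbers are assumptions for a typical 32nm MLC part, not the actual PS3016/PS3105 internals):

```python
# Assumed geometry for illustration only, not the real Phison internals.
capacity = 64 * 2**30      # 64 GiB drive
block_size = 2 * 2**20     # 2 MiB erase block
page_size = 8 * 2**10      # 8 KiB program page

block_entries = capacity // block_size   # block-level mapping table size
page_entries = capacity // page_size     # page-level mapping table size
print(block_entries, page_entries)       # 32768 vs 8388608 entries

# A 4 KiB random write under block mapping forces a read-modify-write of
# the whole 2 MiB block: up to ~512x write amplification in the worst case.
wa_worst = block_size // (4 * 2**10)
print(wa_worst)
```

A block-level table is tiny (fine for a cheap CF/SD controller doing sequential 1MB+ photos), but every small random write pays the full-block rewrite, which matches the dreadful 512K/4K results people see on the 3016.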
The Patriot PS-100 I have uses the 3016 with 32nm Toshiba, 32gbit devices. It does get TRIM, but not NCQ. And it sucks. A lot. You can't even complete a run of AS-SSD.
The MyDigitalSSD Bullet 128GB mSATA uses the Phison, and it's quite good. I just don't think it holds up well over time.
Well, one thing I do notice with the Phison controller is that there is an enormous gap between factory and steady-state performance ... however I'm unsure if the controller would eventually garbage collect and restore decent performance from steady state if left alone for a while. Post secure-erase, the drive is quite fast ... but it very soon degenerates back to the slow steady state. I might turn off the endurance test for a night soon and see.
All I can see now is the huge gap between the Phison and the other SSDs I own. A SandForce drive will happily accept 60MB/s or more (depending on drive/NAND) of writes for as long as you want without performance significantly falling away.
Edit: Yep, I do get your point about Phison controllers being generally for portable media ... my USB 3.0 memory stick uses a Phison controller, and for a USB stick it is quite excellent (where write speeds can be as awful as 2-3MB/s if you don't pay attention to what you buy)
Last edited by canthearu; 02-21-2012 at 03:52 PM.
Yeah, I wouldn't bother with seeing if the Phison is going to recover... Just keep testing it, unless it goes down to single digits. After a while it should find an equilibrium of sorts.
My 32GB Patriot PS 100 only manages 3 or 4 MB/s in the test.
anyone else notice this: http://cseweb.ucsd.edu/users/swanson...BleakFlash.pdf
Fast computers breed slow, lazy programmers
The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
http://www.lighterra.com/papers/modernmicroprocessors/
Modern Ram, makes an old overclocker miss BH-5 and the fun it was
Last edited by Christopher; 02-21-2012 at 09:52 PM.
Future drives won't be all bad ... eventually the increased density will lead to increased sizes, which will largely offset the reduction in erase cycle endurance. At the moment, most of the increased density is being used to reduce prices, but we can only go so far in that direction while maintaining performance/endurance.
I certainly won't be buying a TLC NAND based drive (unless it is to murder it in an endurance test). Giving up way too much endurance for far too little space.
Yesterday's update:
Kingston V+100
And it dropped out again.....
Intel X25-M G1 80GB
359.6176 TiB
21097 hours
Reallocated sectors : 00
MWI=103 to 102
MD5 =OK
43.62 MiB/s on avg
m4
446.3624 TiB
1761 hours
Avg speed 72.43 MiB/s.
AD gone from 106 to 100.
P/E 7699.
MD5 OK.
Reallocated sectors : 00
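Those m4 figures allow a rough write-amplification estimate, assuming this is the 64 GB m4 mentioned in the signature and that the P/E counter reflects average erase cycles across the whole capacity (both assumptions, not anything the SMART data states directly):

```python
# Figures from the m4 update above; capacity is an assumption (64 GB m4
# per the signature), and the P/E counter is assumed to be a drive-wide average.
pe_cycles = 7699
capacity_gib = 64
host_writes_tib = 446.3624

nand_writes_tib = pe_cycles * capacity_gib / 1024
wa = nand_writes_tib / host_writes_tib
print(f"~{nand_writes_tib:.0f} TiB written to NAND, WA ≈ {wa:.2f}")
```

That comes out near 1.08, which is about what you'd expect for a largely sequential endurance workload on a drive with decent garbage collection.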
1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
3: Asus U31JG - X25-M G2 160GB
Kingston SSDNow 40GB (X25-V)
732.83TB Host writes
Reallocated sectors : 05 23
Available Reserved Space : E8 99
POH 6591
MD5 OK
33.53MiB/s on avg (~143 hours)
Quite interested as to how that C300 256 GB died, UrbanSmooth.
Yeah, I saw the bleak NAND news. I have been saying this all along. You can only scale NAND so far; we'll need a different storage technology to replace it within the next few years, I think.
TLC NAND ? No, thank you !
People get too worried about SSD wear.
Even with only 1000 write cycles (some hypothetical smaller MLC NAND, say 12nm, 2 generations ahead of current tech), a 1TB SSD would still be fairly midrange and would still handle almost 1 PB of writes before reaching end of NAND life.
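The arithmetic behind that claim, with write amplification of 1.0 and a daily write figure both assumed for simplicity:

```python
# 1 TB drive at 1000 P/E cycles ~= 1 PB of NAND writes.
# WA of 1.0 and the 20 GB/day workload are illustrative assumptions.
capacity_tb = 1.0
pe_cycles = 1000
write_amp = 1.0

total_writes_pb = capacity_tb * pe_cycles / write_amp / 1000
daily_gb = 20
years = total_writes_pb * 1e6 / daily_gb / 365
print(f"{total_writes_pb} PB endurance, ~{years:.0f} years at {daily_gb} GB/day")
```

Even tripling the daily writes and doubling the write amplification still leaves decades of life, which is the point being made.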
Anvil,
How important is the composition of the static data? Should we try to standardize the amount of static data per capacity? Could ASU be modified to generate the static files automatically based on a scale?
That's not a bad idea.
#1 should always be OS files, i.e. a copy of real Windows files, as using real files is fairer to compressing controllers
#2 could be generated to a fixed % of the total capacity and can be a mix of "compression levels"
Well, I was thinking OS files could be simulated with ASU in the endurance or settings tabs... it could just fill x amount of capacity with "OS like" files in terms of size, number, and compressibility with the press of a button. And it would just do a standard percentage (by default, other % options too, maybe) of the drive's capacity. For example, the 16GB MTRON would have 1/4 the simulated OS files of the 64GB Samsung. Since ASU can already generate files of various compression levels and sizes, I was thinking it might not be that difficult to implement... but I'm not really qualified to answer that.
In that way, all new drives could get on a standard static data regimen.
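The generator itself wouldn't be hard; something along these lines could fill a set percentage of the drive with files of a chosen compressibility mix (a sketch of the idea only, not ASU's actual file-generation algorithm, and `make_file` is a hypothetical name):

```python
import random

def make_file(path, size, compressibility):
    """Write `size` bytes; `compressibility` is the fraction of chunks
    that are trivially compressible (zero-filled). A rough stand-in for
    what a 'generate static data' button might do, not ASU's real code."""
    rng = random.Random(42)  # fixed seed so the static data is repeatable
    chunk = 64 * 1024
    written = 0
    with open(path, "wb") as f:
        while written < size:
            n = min(chunk, size - written)
            if rng.random() < compressibility:
                f.write(b"\x00" * n)        # compressible chunk
            else:
                f.write(rng.randbytes(n))   # incompressible chunk
            written += n

# e.g. fill 10% of a drive with a 50/50 mix, scaled to capacity:
# for i in range(num_files):
#     make_file(f"static_{i}.bin", file_size, 0.5)
```

Scaling `num_files * file_size` to a fixed percentage of drive capacity would give every new drive the same static-data regimen, which is exactly the standardization being proposed.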
Drive hours: 173
GB written: 9488.36 (9.2662 TiB)
Avg MB/s: 16.42
Bad blocks: 83
Wear cycle counter: 0/596/911