Let's hope so :)
SSDLife is just reading value 202
I know... just really not applicable.
I'm really, really impressed with its performance.
EDIT
I emailed someone at OCZ to ask what the deal was with my brand-new Vertex Turbo 120. I asked if it was possible for a drive manufactured in the last few months to have 50nm Samsung flash (if that was in fact what it was using). I posted the identity tool results of 081102. I said I had bought the drive a little over two weeks ago, new, from an e-tailer. It had the pre-May plastic chassis (which doesn't fit in many laptops, as the case dimensions are too large).
They sent me the datasheet for some Samsung NAND and replied, "hope this helps".
I'm not sure it does... on the bright side, it's nice that I can just ask random questions to people at OCZ.
Mushkin Chronos Deluxe 60 Update
05 2 (Retired Block Count)
B1 16 (Wear Range Delta) (down from 21)
F1 113531 (Host Writes)
E9 87533 (NAND Writes)
E6 100 (Life Curve)
E7 55 (Life Left) (down from 60)
127.02MB/s avg (up from 124.95)
261 hours work time (10 days, 21 hours), 22hrs since the last update
Last 23.64hrs on 6Gbps port (Biostar TH67+)
11GiB minimum free space, 11500 files per loop, 12.9 loops per hour
SSDlife expects 14 days to 0 MWI
Attachment 120781
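Out of curiosity, a quick sanity check on that SSDLife estimate. This assumes the E7 drop from 60 to 55 happened over roughly the 22 hours since the last update and continues linearly; those assumptions are mine, not SSDLife's method.
[CODE]
# Assumed-linear projection of time until E7 (Life Left) reaches 0,
# using the drop from 60 to 55 over ~22 hours (my assumption, see above).
mwi_now, mwi_prev, hours_between = 55, 60, 22

rate = (mwi_prev - mwi_now) / hours_between      # points lost per hour
hours_left = mwi_now / rate
print(f"~{hours_left:.0f} hours (~{hours_left / 24:.1f} days) to 0 MWI")
# -> ~242 hours, roughly 10 days: same ballpark as SSDLife's 14-day figure.
[/CODE]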
Hi Johnw, have you tried reading from the Samsung recently? I'm not 100% sure, but I believe current SSDs use dynamic wear levelling rather than static wear levelling. Dynamic wear levelling excludes static data, which means that the NAND holding static data on your SSD would have no wear. It would be interesting to run the endurance test on another Samsung without static data to see how much longer it would last.
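Just to illustrate the distinction, here is a toy Python sketch I put together; it is not any vendor's actual algorithm, and the block and write counts are made up. The point is only that under dynamic wear levelling the blocks pinned with static data never rejoin the pool, so all the wear concentrates on the remaining blocks, whereas static wear levelling relocates static data so wear spreads across everything.
[CODE]
NUM_BLOCKS = 100      # hypothetical flash blocks
STATIC_BLOCKS = 40    # blocks pinned with never-rewritten (static) data
WRITES = 100_000      # simulated block writes from the host

def simulate(static_wear_levelling: bool) -> list:
    """Per-block erase counts after WRITES writes, always picking the
    least-worn block among the candidates the policy allows."""
    wear = [0] * NUM_BLOCKS
    static = set(range(STATIC_BLOCKS))
    for _ in range(WRITES):
        if static_wear_levelling:
            # Static WL: static data gets relocated, so every block stays
            # in the candidate pool and wear evens out across all of them.
            candidates = range(NUM_BLOCKS)
        else:
            # Dynamic WL: only blocks holding dynamic data are recycled.
            candidates = [b for b in range(NUM_BLOCKS) if b not in static]
        target = min(candidates, key=lambda b: wear[b])
        wear[target] += 1
    return wear

dyn = simulate(static_wear_levelling=False)
sta = simulate(static_wear_levelling=True)
print("dynamic WL: max wear", max(dyn), "| static-data block wear", dyn[0])
print("static  WL: max wear", max(sta), "| static-data block wear", sta[0])
[/CODE]
With those made-up numbers the dynamic case piles all 100k writes onto the 60 "dynamic" blocks, so they wear about 1.7x faster than under static wear levelling, while the 40 static blocks never wear at all.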
After about 30 hours of continuous running, the Mushkin disconnected -- but I actually think it was something I did. I ejected a USB HDD enclosure a few hours before, and then disconnected it. I left for a few hours, and when I came back to the system CrystalDiskInfo wasn't running anymore. I tried relaunching it several times; after a few seconds it popped up, and at the same moment the Mushkin disconnected. When I had ejected the USB drive earlier, it didn't fully dismount, which CDI doesn't like. Somehow, in that condition, trying to relaunch CDI may have done it. What's weird is that hotplugging is disabled in the UEFI, yet RST still acted as if hotplugging were enabled. So I removed the RST drivers, reverted to MSAHCI, and registry-hacked the two SATA III ports to internal only. The disconnect could have been nothing more than coincidence, but I'm not so certain. Of course, the two other SSDs didn't go anywhere -- only the SF drive.
Kingston SSDNow 40GB (X25-V)
367.12TB Host writes
Reallocated sectors : 10
MD5 OK
35.60MiB/s on avg (~16 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 51 (Wear range delta)
E6 100 (Life curve status)
E7 75 (SSD Life left)
E9 103714 (Raw writes)
F1 138188 (Host writes)
107.07MiB/s on avg (~17 hours).
power on hours : 400
Both are running off the ASRock SnB Z68 rig.
It looks like I had forgotten to disable Security Essentials scanning on the X58, so it could have had higher throughput.
(will test later)
@christopher
Most of the disconnects have happened in conjunction with me checking the status or some other heavy I/O (when using the computer).
It has happened more than once, so I'm not sure what it could mean. All power-saving features are enabled on my systems; I'd be willing to try disabling them if necessary.
Anvil,
I'm starting to think the drives either like your motherboard or they hate it. End of story.
One consideration is the fact that I used the OS installation from the DP67BG mainboard. Windows 7 doesn't protest much (Office needs reactivation though), but I'm just running out of ideas. I don't think the disconnect I had was a coincidence, but I'm going to run the H67 with MSAHCI for a little while anyway. That combination didn't work very well with the other board, and MSAHCI averages about 6MB/s slower as well. I'm pretty sure I caused the disconnect, but the drive is just really "sensitive"; I'd had a similar circumstance with the other board too. I'll take a crash every thirty hours over one every eighteen, but I won't be happy about it.
M225->Vertex Turbo 64GB Update:
420.32 TiB (462.14 TB) total
1149.38 hours
8760 Raw Wear
117.04 MB/s avg for the last 15.32 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 6.
(Bank 6/Block 2406; Bank 3/Block 3925; Bank 0/Block 1766; Bank 0/Block 829; Bank 4/Block 3191; Bank 7/Block 937)
Attachment 120795
Yes, I tried and failed, then had a small catastrophe.
The Samsung met its "write death" on Aug 20, so I figured I would check it again for readability on September 20. I plugged the drive in and powered up my computer, but neither Windows nor the BIOS could see the Samsung SSD. I tried rebooting a couple times to no avail.
Then I powered down and disconnected the SSD, and brought it to another computer with an eSATA port and an external SATA power connector. Disaster! When I went to plug in the power to the SSD, my hand slipped and bent the connector sideways, snapping off the plastic ridge of the SATA power connector. The metal pieces are still there (and still soldered to the PCB), but they are not stabilized by the plastic ridge. I found that it is still possible to get the SSD to power up by working a SATA power connector onto the metal pieces at the right position (they have a warp/bend to them that actually helps a little), but I am not certain that they are all making contact. But it is enough that when I "hot plug" the SSD, Intel RST notices something, although it never manages to mount the SSD.
I've been contemplating trying to repair the connector, but I have not yet come up with a good plan. I might be able to super-glue the plastic ridge back on, but I think it is going to be difficult to get it lined up properly (it is in two pieces). I'm also thinking about trying to solder another SATA power connector on (if I can salvage one from a dead HDD), but there is a lot of solder there, and if I get it hot enough to desolder I am worried I might disturb some of the other components on the SSD PCB. So I haven't done anything yet.
Actually, if anyone reading this is experienced at this sort of thing, and would like to contribute to this thread, I'd be happy to send the SSD to you for repair and then you could keep it (if you are willing to try the read-only tests yourself), or send it back, whichever works best for you.
Johnw - I have a pretty high-end forensics/data recovery lab over here :). A SATA connector is very easy for me to repair. Furthermore, I actually have the ability to take the NAND chips right off the Samsung and try to read them directly with a specialized device to see how bad it is :). I will be doing that to my Intel when it finally dies.
I am in Canada though.
I'm not convinced about that; it's more like some SSDs are OK and some are not (could be a combo, of course).
My 240GB SF-2281s have never caused BSODs, just the 60GB Agility and the 120GB Force 3.
(not sure about the Force GT 120GB, it might have had issues)
None of the 240GB drives have been used in Endurance testing though.
---
Kingston SSDNow 40GB (X25-V)
368.10TB Host writes
Reallocated sectors : 10
MD5 OK
34.43MiB/s on avg (~25 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 52 (Wear range delta)
E6 100 (Life curve status)
E7 74 (SSD Life left)
E9 106187 (Raw writes)
F1 141480 (Host writes)
107.02MiB/s on avg (~25 hours).
power on hours : 409
Although the Samsung performed admirably, I can't help thinking that it should have flagged a warning (via SMART) once a critical endurance threshold had been reached, and then switched the drive to read-only after a warning period. At least it would then have failed gracefully.
According to JESD218A, "The SSD manufacturer shall establish an endurance rating for an SSD that represents the maximum number of terabytes that may be written by a host to the SSD." It then outlines integrity conditions that the SSD must meet after the maximum amount of data has been written:
1) The SSD maintains its capacity
2) The SSD maintains the required UBER for its application class
3) The SSD meets the required functional failure requirement (FFR) for its application class
4) The SSD retains data with power off for the required time for its application class
The retention requirement for data in a powered-off condition is specified as 1 year for Client applications and 3 months for Enterprise (subject to temperature boundaries).
I’m really not sure why the MWI appears to be so conservative. Does it really represent a point in time when the endurance threshold to maintain integrity (according to JEDEC specs) has passed? The Samsung wrote over 3 ½ times the data required to expire the MWI. Are you really supposed to throw it away when the MWI expires?
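For anyone wondering how an endurance rating like that is normally arrived at, here is a minimal back-of-envelope sketch; the capacity, P/E rating and write amplification below are placeholders I picked for illustration, not Samsung's actual figures.
[CODE]
# Hypothetical endurance-rating calculation in the spirit of JESD218A:
# rated TBW ~ (capacity * rated P/E cycles) / write amplification.
capacity_gb = 64            # user capacity in GB (placeholder)
rated_pe_cycles = 3000      # rated P/E cycles per cell (placeholder)
write_amplification = 1.5   # assumed average WA for the rated workload

tbw = capacity_gb * rated_pe_cycles / write_amplification / 1000  # terabytes
print(f"Rated endurance ~ {tbw:.0f} TB written")  # ~128 TB with these numbers
[/CODE]
The MWI is typically tied to the rated P/E cycles rather than to an observed failure point, which would go some way towards explaining why the drive sailed so far past it.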
It will be really interesting to see what One_Hertz can uncover on the condition of the NAND.
Anyway, I came across an interesting paper from SMART Modular Technologies. This is the second time I've seen compressibility referred to in terms of data randomness. Does anyone know why the randomness of data is linked to its compressibility?
All compression relies on finding some sort of pattern in the data, usually various kinds of repetition. Random data, by definition, has no pattern. Therefore, truly random data cannot be compressed.
Also, data that has already been highly compressed will no longer have patterns that can be exploited for further compression. That is what they mean by high entropy.
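A quick way to see it for yourself; nothing SandForce-specific, just zlib on a megabyte of random versus repetitive data.
[CODE]
import os
import zlib

size = 1_000_000
random_data = os.urandom(size)           # high entropy: effectively incompressible
repetitive_data = b"ABCD" * (size // 4)  # low entropy: compresses to almost nothing

for name, data in (("random", random_data), ("repetitive", repetitive_data)):
    compressed = zlib.compress(data, 9)
    print(f"{name:10s}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%})")
[/CODE]
The random buffer comes out marginally larger than it went in, while the repetitive one shrinks to a fraction of a percent of its original size.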
Today's update
m4
597.4063 TiB
2190 hours
Avg speed 88.82 MiB/s.
AD gone from 13 to 09.
P/E 10423.
MD5 OK.
Still no reallocated sectors
Attachment 120820 Attachment 120821
Kingston V+100
111.9978 TiB
441 hours
Avg speed 77.11 MiB/s.
AD gone from 93 to 83.
P/E ?.
Attachment 120818 Attachment 120819
Thanks for that John. So it looks like the XceedIOPS SSDs employ compression techniques. I wonder how SMT compression compares to SF on a like-for-like basis.
1. Mixed random write workload—Medium degree of compressibility, write operations aligned on 4K boundaries, random starting LBAs. With a 28% over‐provisioned XCeedIOPS SSD, the average Write Amplification is approximately 1.0.
2. Database write workload—Highly compressible data, write operations aligned on 4K boundaries, random starting LBAs. With a 28% over‐provisioned XCeedIOPS SSD, the average Write Amplification is approximately 0.75.
3. Video Server workload—Minimal compressible data, write operations aligned on 4K boundaries, random starting LBAs. Represents a generic worst‐case write workload. With a 28% over‐provisioned XCeedIOPS SSD, the average Write Amplification is approximately 4.0.
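On the SF drives in this thread you can get a rough empirical figure for the same metric straight from SMART, since the E9 and F1 values reported above appear to be NAND writes and host writes in matching units (that's my reading of the attributes, not Corsair's documentation).
[CODE]
# Write amplification ~ NAND writes / host writes. The units cancel in the
# ratio; a value below 1.0 just means the controller's compression wrote
# less to NAND than the host sent it.
nand_writes = 106187   # E9 raw value from the Force 3 update above
host_writes = 141480   # F1 raw value from the same update

wa = nand_writes / host_writes
print(f"Apparent write amplification ~ {wa:.2f}")  # ~0.75 with these numbers
[/CODE]
Which happens to land right around the 0.75 SMT quotes for a highly compressible workload.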
With regard to the Samsung, surely it only takes enough bad blocks that the drive can't maintain its own ECC or data-integrity scheme. Not every block, or even 25% of blocks, could be bad... right?
Anvil,
I have to think that a lot of SF2281 drives are just BSODs waiting to happen... I'm not sure why only some drives seem to have problems, but endurance testing seems to tease it out. I wouldn't be at all surprised to learn that SF just can't handle days on end of endurance loads, regardless of motherboard. However, I've seen marked improvement with the H67, if not complete rock-solid stability. With a normal desktop load, some motherboards and drives just don't work well together.
I really want another Mushkin Chronos Deluxe to play with, but I'm running out of systems to use in such a small space. I'd need a bigger apartment for another system. Might be worth it though.
The Force 3 disconnected 35 minutes ago and it was one of those times where it left the system completely frozen. (it's the 2nd time iirc)
(it froze in the middle of a loop)
So, it does not look like the new firmware has made any difference so far.
I'll probably secure erase it one of these days.
@christopher
Yeah, you better get a bigger apartment :p:
It's hard to tell; if the usage pattern is to restart every day, one might not get hit by the issue at all. Too many factors.
In my case most disconnects are somewhere between 24-35 hours.
I can safely say that the freeze is easily reproducible on several systems.
There are still a few more options to try in my case, like disabling power saving; I'll give it a try.
Anvil, try animal sacrifice first -- power saving options second...
:)
Someone mentioned Voodoo science on the OCZ forums, so, still a few more options left. :ROTF:
You have to cut the head off the chicken very carefully...
I've been reading the OCZ forums frequently of late, including the "voodoo magic" thread.
Goat sacrifices aren't recommended with 3.20/2.13 FW... don't forget to properly clean the motherboard's aura by burning sage first, but only on the vernal equinox.
lol. we will live to see it fixed, or recalled.
hopefully.
OCZ is the poster child for spontaneous bluescreen interruptus, only because it's lonely at the top -- they must sell more 2281s than everyone else put together. There won't be a recall because no one knows why it happens. As long as SandForce can lay this at the feet of Intel, nothing is going to happen. The fact that other drives don't have the same problem is a further indictment of SandForce.
I still want another one....