http://www.xtremesystems.org/forums/...=1#post4854810
The short version: I am using a kernel-level module to continuously write sectors to SSDs and read them back to check for errors. A failure means a write/read cycle did not succeed, i.e. the data read back does not match the data written. All sectors receive an equal number of writes. Once 90% of sectors fail, the drive is considered failed.
All drives receive an exactly equal distribution of writes at a constant speed of 50MB/s.
SMART errors are not even looked at.
A failed sector is one that is unable to complete a successful write-and-read cycle after 200 attempts to write to it.
A failure means the data read from the sector does not match the data written to it.
The Intel drive was classified as failed the instant that 90% of its sectors had failed.
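For anyone curious what the test loop looks like, here is a minimal user-space sketch of the same logic in Python. The actual test runs as a kernel module, so this is only an illustration; the sector size, function names, and file-descriptor interface are my assumptions, not the module's real code. It captures the two rules above: a sector fails after 200 unsuccessful write/read attempts, and the drive fails at 90% failed sectors.

```python
import os

SECTOR_SIZE = 512        # bytes per sector (assumed; the post does not give a size)
MAX_ATTEMPTS = 200       # write attempts before a sector is declared failed
FAIL_THRESHOLD = 0.90    # drive is considered failed at 90% failed sectors

def sector_failed(fd, sector, pattern):
    """True if the sector cannot complete one matching write/read
    cycle within MAX_ATTEMPTS attempts."""
    offset = sector * SECTOR_SIZE
    for _ in range(MAX_ATTEMPTS):
        os.pwrite(fd, pattern, offset)        # write the test pattern
        os.fsync(fd)                          # push the write down to the device
        if os.pread(fd, SECTOR_SIZE, offset) == pattern:
            return False                      # round trip succeeded
    return True

def drive_failed(fd, total_sectors, pattern):
    """True once the fraction of failed sectors reaches the threshold."""
    failed = sum(sector_failed(fd, s, pattern) for s in range(total_sectors))
    return failed / total_sectors >= FAIL_THRESHOLD
```

Running this against a raw block device (e.g. a fd from `os.open("/dev/sdX", os.O_RDWR)`) would need root and would destroy the drive's contents, which is of course the point of the test.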
Closer analysis shows that less than 1% of the data read back from failed sectors matched what was actually written, and that errors tended to accumulate rapidly near the end of the drive's life. It performed quite well until the first sector failure; after that, the drive died within a mere couple more days of testing. The second drive is currently at 42% failed and is expected to die by tomorrow night.