Thread: SandForce "SF-22XX" issues and workarounds

  1. #1
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Christopher View Post
    Incidentally, I've been wondering about SF 1 and 2. Since they don't have DRAM caches, they keep their working data in NAND on the drive. Maybe SFs aren't limited by overall NAND endurance; perhaps they keep wearing out the same NAND device by storing all the associated SF data there. If the controller doesn't adequately rotate where it stores the data needed to make SF work, writing millions of files could be more detrimental than the raw NAND writes alone.
    I'd guess the SF data is tied to a fixed number of pages (e.g., keeping parity data for 8 pages in a separate, dedicated page), so the number of files should not matter much. Also, pages are most probably grouped, so the write pattern should not affect usage either. SandForce advertises that internal data is spread across all dies for better redundancy; it would be very sad to find out that this is not true.
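    If that guess is right, the fixed overhead is easy to bound. A quick back-of-the-envelope sketch (the group sizes are made up; the real layout isn't public):

    Code:
    # Toy arithmetic, not SandForce's actual scheme: if the controller
    # stores one parity page per group of N data pages, every N host
    # pages cost N + 1 NAND pages, so the fixed overhead is 1/N.

    def parity_overhead(data_pages_per_group: int) -> float:
        """Extra NAND writes from parity, as a fraction of data writes."""
        return 1.0 / data_pages_per_group

    for n in (4, 8, 16):
        print(f"1 parity page per {n:2d} data pages -> "
              f"{parity_overhead(n):.1%} overhead")

    Interestingly, 1 parity page per 8 data pages works out to 12.5% fixed overhead, which is in the same ballpark as the ~13% zero-fill compression figure mentioned further down the thread.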
    Last edited by sergiu; 10-10-2011 at 04:00 PM. Reason: Wrong idea

  2. #2
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by sergiu View Post
    For an SSD, there should be no difference other than WA between writing one file and writing one million, as writing file-system metadata is still just a write request like any other. It would be possible for an SSD to "know" about the OS file system, but that would be "suicide", as the model would be tied to a specific file system and thus to a specific family of OSes.
    The SF has to keep tables of information and, I'd guess, hash data that get written alongside the NAND writes. Since it keeps this info in flash rather than DRAM, couldn't it wear out that NAND through constant writing if it stores this info in the same area without rotating it? Maybe I'm grossly overestimating the amount of data involved, but if the controller always writes this information to the same physical NAND device, it could put disproportionate wear on that device versus the others. Forget the file system; maybe the amount of data varies with the compressibility of what's written, or something. Maybe it's also different for smaller-capacity drives that don't have a whole die's worth of NAND sacrificed.

    I'm not really smart enough to know much about that, but I wonder if SF drives might have additional reasons not to wear in the same fashion as non-SF drives. So far, wear on non-SF drives has been pretty linear. Perhaps something else is going on behind the scenes with SF that would keep them from wearing out in the same linear manner.
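    For what it's worth, the worry is easy to model. A toy simulation, purely illustrative (8 dies and one metadata write per host write are both invented numbers):

    Code:
    import random

    DIES = 8
    HOST_WRITES = 100_000

    def simulate(rotate_metadata):
        """Per-die page-write counts for a toy drive where every host
        write also triggers one metadata write."""
        wear = [0] * DIES
        for i in range(HOST_WRITES):
            wear[random.randrange(DIES)] += 1              # data page, wear-leveled
            wear[i % DIES if rotate_metadata else 0] += 1  # metadata page
        return wear

    print("metadata pinned to die 0:", simulate(False))
    print("metadata rotated        :", simulate(True))

    With the metadata pinned, die 0 takes roughly nine times the writes of its neighbours (~112k vs ~12.5k); rotated, every die lands near 25k. So if SandForce really didn't rotate this data, it would show up very quickly in per-die wear.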

  3. #3
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    There are a lot of possible variations, indeed. Another problem is that we do not know the internal details for certain; we can only guess from behavior, and we might be far from the truth. Either way, the overhead cannot be high enough to seriously impact wear. At zero fill, for example, the drive shows a compression rate of ~13%, and I have always assumed that a big part of that percentage is actually the overhead.
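    To make that assumption explicit (only the ~13% figure comes from measurement; the payload/overhead split below is pure guesswork):

    Code:
    # Only the ~13% zero-fill figure is measured; the split is a guess.
    host_writes_gib = 100   # hypothetical zero-fill workload
    nand_writes_gib = 13    # implied by the ~13% compression figure
    payload_gib = 1         # assumption: zeros compress to almost nothing

    overhead_gib = nand_writes_gib - payload_gib
    print(f"implied fixed overhead: {overhead_gib / host_writes_gib:.0%} "
          f"of host writes")

    On that reading, almost all of the 13% would be controller overhead (mapping tables, parity, headers), which caps how much extra wear the internal data can be causing.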
