
Thread: SandForce "SF-22XX" issues and workarounds

  1. #51
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    Oh I didn't know the Mushkin drives used the toggle NAND as well. Guess you have ruled out that NAND then.

  2. #52
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by squick3n View Post
    Oh I didn't know the Mushkin drives used the toggle NAND as well. Guess you have ruled out that NAND then.
    Mushkin has the Chronos and the Chronos Deluxe. The Chronos line is asynchronous IMFT 25nm NAND. The Chronos Deluxe is 32nm Toshiba Toggle NAND. Both drives come in the same retail box and the labels on the drives look the same as well. Mushkin is the only company selling in America that has a 60GB Toggle NAND 2281.

    Mushkin Chronos, OCZ Agility3/Solid3, Corsair Force, OWC Mercury 6G = IMFT 25nm Asynchronous

    Corsair Force GT, OCZ Vertex 3, Older OWC Mercury Pro 6G, Kingston HyperX = IMFT 25nm Synchronous

    Mushkin Chronos Deluxe, Patriot WildFire, newer OWC Mercury Pro 6G, OCZ MaxIOPS = Toshiba 32nm Toggle NAND
    Last edited by Christopher; 10-10-2011 at 06:56 AM.

  3. #53
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    Thanks for the info. I really enjoy the performance of the MI, but if there are comparable drives out there w/o LTT, I might look into those.

  4. #54
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by squick3n View Post
    Thanks for the info. I really enjoy the performance of the MI, but if there are comparable drives out there w/o LTT, I might look into those.
    Have you been throttled with your Max Iops?

    I don't think it should ever be much of a concern unless you're writing... I don't know exactly, but probably a few hundred GB a day. Maybe even more, but I don't think we have confirmation that the Max Iops or Vertex 3 have LTT to begin with. If you aren't having problems with your MI, I say leave it.

  5. #55
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    Oh I've had no problems. But I occasionally do some heavy video editing, and don't like the unknown nature of LTT.

  6. #56
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    As time goes on, you'll have less and less to worry about. If your system is always on but you only use it part of the day, you should accumulate enough power-on hours. The drive might not have LTT at all; I think you just need to look at the SMART data. If the life curve stays at 100, you don't have LTT. Someone else like Ao1 might be able to verify this (I could be completely wrong).
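    A quick way to keep an eye on that is to poll SMART with smartctl. Below is a minimal sketch, assuming the "life curve" is the normalized value of attribute 231 (SSD Life Left), a common reading for SandForce drives; the attribute ID is an assumption, so check your vendor's SMART documentation before relying on it.

    import subprocess

    def read_smart_value(device, attr_id):
        """Return the normalized VALUE column of one SMART attribute,
        or None if the drive does not report it."""
        # smartctl uses its exit code as a bit mask, so a nonzero
        # status is not treated as a failure here.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows start with the numeric ID; VALUE is column 4.
            if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
                return int(fields[3])
        return None

    # 231 (E7) as "SSD Life Left" is an assumption for SandForce drives.
    life = read_smart_value("/dev/sda", 231)
    if life is not None:
        print(f"Life curve: {life} (a steady 100 suggests LTT has not engaged)")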

  7. #57
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Well, it just rebooted.

    The interesting thing here is that it wasn't the Force 3 that disconnected, it was the boot drive (Force GT), and it is still running FW 1.3.

    So, a bit annoying as it was past 27 hours; the Force GT had been online for 72 hours when it disconnected.
    I had to cold boot, as it wouldn't show up after several warm boots.
    It's time to upgrade the Force GT to the latest firmware.

    I had hoped the Force GT would sort of confirm that it was stable (as it had been online for quite some time) but apparently not, at least not on the 1.3 FW.

  8. #58
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I didn't know you were running both Corsairs on the same system, but I have a hard time just keeping up with one...

  9. #59
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The Force GT is just the boot drive; there's no testing except for mostly idling, which looks to be a test of its own.

    The Kingston (X25-V) is back online on the X58; the Force 3 has been offline while I decide what to do next.

  10. #60
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I knew you weren't testing it, but I didn't know you had two SF2281s in one rig. I think idling is the opposite extreme of the endurance test anyway for SF2. On my system, I've been writing about 0.7 to 1.2 GB a day on the system drive. I was using my Intel X25-E, but cloned that to my Vertex Turbo 128.

    Incidentally, I've been wondering about SF 1 and 2. Since they don't have DRAM caches, they have to use NAND on the drive. Maybe SF drives aren't limited by overall NAND endurance; perhaps they keep wearing out the same NAND device by storing all the associated SF data there. If the controller doesn't adequately rotate where it stores all the data necessary to make SF work, maybe writing millions of files is more detrimental than the plain writes to NAND would suggest.
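    To make that worry concrete, here is a toy model (not SandForce's actual, undocumented mapping) of eight dies where every host write also triggers one small metadata write. Pinning the metadata to a single die concentrates wear there; rotating it keeps wear even.

    DIES = 8
    HOST_WRITES = 1_000_000  # host data chunks written

    def simulate(rotate_metadata):
        """Count writes per die when metadata is pinned vs. rotated."""
        wear = [0] * DIES
        for i in range(HOST_WRITES):
            wear[i % DIES] += 1                        # host data, striped
            meta_die = i % DIES if rotate_metadata else 0
            wear[meta_die] += 1                        # metadata write
        return wear

    print("pinned: ", simulate(rotate_metadata=False))  # die 0 takes ~9x the wear
    print("rotated:", simulate(rotate_metadata=True))   # wear stays even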

  11. #61
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Christopher View Post
    Incidentally, I've been wondering about SF 1 and 2. Since they don't have DRAM caches, they have to use NAND on the drive. Maybe SF drives aren't limited by overall NAND endurance; perhaps they keep wearing out the same NAND device by storing all the associated SF data there. If the controller doesn't adequately rotate where it stores all the data necessary to make SF work, maybe writing millions of files is more detrimental than the plain writes to NAND would suggest.
    I guess SF data is specific to a number of pages (like keeping parity data for 8 pages in a separate, complete page), so the number of files should not matter too much. Also, pages are most probably grouped, so the write pattern should also not impact the usage. SandForce advertises the fact that internal data is spread across all dies for better redundancy. It would be very sad to find out that this is not true.
    Last edited by sergiu; 10-10-2011 at 04:00 PM. Reason: Wrong idea
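    For readers unfamiliar with the parity idea sergiu is guessing at, here is a minimal XOR-parity sketch in the spirit of RAID-5 (RAISE's real layout is undocumented, so this is purely illustrative): one parity page protects a group of eight data pages, and the parity cost scales with pages written, not with the number of files.

    import os

    PAGE_SIZE = 4096

    def parity_page(pages):
        """XOR a list of equally sized pages into one parity page."""
        p = bytearray(PAGE_SIZE)
        for page in pages:
            for i, b in enumerate(page):
                p[i] ^= b
        return bytes(p)

    group = [os.urandom(PAGE_SIZE) for _ in range(8)]
    parity = parity_page(group)

    lost = group.pop(3)                       # simulate one failed page
    rebuilt = parity_page(group + [parity])   # XOR survivors with parity
    assert rebuilt == lost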

  12. #62
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by sergiu View Post
    For an SSD, there should be no difference other than WA between writing one file and writing one million, as writing file system metadata is still just a write request like any other. It would be possible for an SSD to "know" about the OS file system, but that would be "suicide", as the model would be tied to a specific file system and thus to a specific family of OSes.
    The SF has to keep tables of information and, I guess, hash data that gets written in conjunction with NAND writes. If it writes this info to the same area without rotating it, couldn't it wear out that NAND from the constant writing, since it's keeping this info in flash? Maybe I'm grossly overestimating the amount of data involved, but if it's always writing this information to the same physical NAND device, it could put disproportionate wear on one NAND device vs. the others. Forget the file system; maybe the amount of this data varies with the compressibility of the written data, or something. Maybe it's different for smaller-capacity drives that don't have a whole die's worth of NAND sacrificed.

    I'm not really smart enough to know much about that, but I wonder if SF drives might have additional reasons not to wear in the same fashion as non-SF-controlled drives. So far, wear on non-SF drives has been pretty linear. Perhaps there is something else going on behind the scenes with SF that would keep them from wearing out in the same linear manner.

  13. #63
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    There are a lot of possible variations, indeed. Another thing is that we don't know various internal details for sure; we can only guess based on behavior, and we might be far from the truth. Either way, the overhead cannot be so high as to seriously impact wear. At zero fill, for example, the drive shows a compression rate of ~13%, and I have always assumed that a big part of those percentage points is actually the overhead.
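    A back-of-envelope check of that ~13% figure: an all-zero stream should compress to almost nothing, so if zero-fill writes still land on NAND at ~13% of the host volume, most of that floor is plausibly fixed overhead (mapping data, ECC, headers). The sketch below just works through the arithmetic; the 1% payload figure is an assumption for illustration.

    host_written_gb = 20.0    # e.g. one benchmark run's worth of host writes
    zero_fill_ratio = 0.13    # NAND writes / host writes at zero fill

    nand_written_gb = host_written_gb * zero_fill_ratio
    print(f"NAND writes: {nand_written_gb:.1f} GB")          # ~2.6 GB

    # If the compressed payload itself were, say, 1% of host data,
    # overhead would account for roughly 12 of the 13 percentage points.
    payload_gb = host_written_gb * 0.01
    overhead_gb = nand_written_gb - payload_gb
    print(f"Implied overhead: {overhead_gb:.1f} GB "
          f"({overhead_gb / host_written_gb:.0%} of host writes)")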

  14. #64
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The Mushkin team just released some new standard SF firmware, 3.3.0, so I'd assume that other vendors will have it out as well. This poses a dilemma, as I've just found stability.

    EDIT
    I can't tell you how irritating it is that SF fw releases don't come with changelogs... even some really basic information would be helpful.

    Many users are going to want to try the new fw even if they don't have problems, and if history is any indication, at least some of those people are going to see instability for the first time.

    Even Intel will tell you (sometimes) what new FW is for... like adding TRIM, or the last Intel 320 FW fixing the 8MB bug.
    Last edited by Christopher; 10-10-2011 at 05:38 PM.

  15. #65
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Christopher View Post
    Incidentally, I've been wondering about SF 1 and 2. Since they don't have DRAM caches, they have to use NAND on the drive.
    I don't think that is true. They may not cache host data in DRAM, but I think that they keep some metadata in DRAM. That is similar to how the Intel 320 SSDs work -- they also do not cache host data, but they do have a small DRAM for holding metadata.

    By the way, remember that some of the higher end Sandforce SSDs have/had a supercapacitor for backup power. There would be little reason for a supercapacitor if there was no information stored in volatile memory.

  16. #66
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by johnw View Post
    I don't think that is true. They may not cache host data in DRAM, but I think that they keep some metadata in DRAM. That is similar to how the Intel 320 SSDs work -- they also do not cache host data, but they do have a small DRAM for holding metadata.

    By the way, remember that some of the higher end Sandforce SSDs have/had a supercapacitor for backup power. There would be little reason for a supercapacitor if there was no information stored in volatile memory.
    That's a very good point.

    With a 120GB Toggle NAND-equipped 2281 you end up with 111GB, with almost 17GB worth of over-provisioning and one die given up for the RAISE scheme (and supposedly other SF-related needs). The SF processor is said to have some internal cache, but probably not very much, so who knows?
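    Those numbers line up if you assume a typical build of sixteen 64Gbit (8 GiB) Toshiba 32nm dies; the die count here is an assumption, not a teardown. The arithmetic:

    GiB = 2**30
    raw_bytes = 16 * 8 * GiB      # ~137.4 GB of raw NAND (assumed 16 dies)
    user_bytes = 120 * 10**9      # decimal gigabytes, as advertised

    print(f"OS-reported size: {user_bytes / GiB:.1f} GiB")                  # ~111.8
    print(f"Over-provisioning: {(raw_bytes - user_bytes) / 10**9:.1f} GB")  # ~17.4
    print(f"One die for RAISE: {8 * GiB / 10**9:.1f} GB of that")           # ~8.6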

    I'm way more excited about almost hitting the 60hr mark with the Mushkin. If I can hit 75 or 100 hours, I'm going to "undo" the changes I've made and see if I go back to having crashes every 30 hours (or earlier if I want... I can make it happen whenever I want, so long as I only want it to happen within 12 hours with MSAHCI, but there's no magic switch to make it happen immediately). The 30hrs figure is with the latest official Intel RST drivers on both 1155 motherboards, and I have another board on the way to test with as well.
    Last edited by Christopher; 10-10-2011 at 07:23 PM.

  17. #67
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I still maintain that there is something fundamentally flawed with the SF2281s and that all drives will exhibit the BSoD/disconnect issues. I have finally found a fix for the Mushkin, and I'm thinking about getting another SF2281 drive to test. But I would like one with a track record of producing BSoDs, and if I get another new one, I would expect to be able to make all three drives (the Mushkin, a used drive with a BSoD record, a new retail model) crash or not, as I choose.

  18. #68
    Registered User
    Join Date
    Nov 2008
    Location
    Canada
    Posts
    60
    Can you tell us what fix you applied? I tried to find it, but I couldn't be sure.
    Last edited by kensiko; 10-11-2011 at 06:01 PM.
    ---------------------------------
    Cooler Master HAF912
    Kingston Twister bearing 120mm fans
    Sunbeam Rheosmart, fans controlled with Speedfan
    Asrock Z68 Extreme3 Gen3, modded BIOS OROM 11.6
    2500K @ 4.5 GHz
    OCZ Vendetta 2
    VisionTek HD7850
    4 x 4GB Gskill 1600MHz 1.5V
    1680GB of SSD: Mushkin Chronos Deluxe 240GB, Sandisk Extreme 480GB, 2 x Mushkin Chronos 480GB RAID0
    LG 10x Blu-ray burner and Lite-On DVD burner

  19. #69
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by kensiko View Post
    Can you tell us what fix you applied? I tried to find it, but I couldn't be sure.
    I'm still testing it to make sure it's actually a fix, and I'd like to test it some more before I tell everyone about it.

    So no, you didn't miss it; I just haven't told anyone yet. After some more testing, I'm going to see if Anvil will try it out.
    After I make sure I'm not just imagining this, I'll tell everybody who'll listen.

  20. #70
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Kensiko,

    Are you having problems with an SF2281?

    If so, could you list the full specs of the system that's experiencing the problems?
    I'm just curious, and it might be handy to have some people here with the problem in the future.

    Or were you just wondering what the hell I was doing? Because sometimes I wonder that myself...
    Last edited by Christopher; 10-12-2011 at 02:29 AM.

  21. #71
    Xtreme Member
    Join Date
    Dec 2004
    Location
    Romania
    Posts
    260
    My days of joy with the SF-2281 go like this (an OCZ Vertex 3 120G, to be more precise). The motherboard is an ASUS P8Z68 Deluxe, BIOS 0706; all C-states are enabled (not Auto, but actually set to Enabled); TRIM is enabled.
    - 2.11 (came with 2.06, updated before installing Win on an ICH9R mobo via the Linux tool) + IRST 10.6: first reboot about 1h after installing Windows (while Win was installing updates); a BSOD about 6 hours later
    - 2.11 (force-flashed with all the OCZ tweaks via the Linux tool: remove CMOS battery, remove drive power, etc.) + IRST 10.6: crashed the next day when trying to boot
    - 2.09 (flashed with all the OCZ tweaks via the Linux tool) + IRST 10.6: crashed in about 20 hours (soon after Win 7 started in the morning)
    - 2.06 (flashed with all the OCZ tweaks via the Linux tool) + IRST 10.6: crashed in about 26 hours (soon after Win 7 started in the morning)
    - 2.11 (flashed via the OCZ Windows tool) + MSAHCI (with the tweaks Tony specified to force the ports to internal): solid freeze after 10 days (no BSOD)
    - 2.13 (flashed via the OCZ Windows tool) + MSAHCI (with the tweaks Tony specified to force the ports to internal): running stable for 11 days 20 hours

    About the strange behavior of E9: I don't think it's actually a wrong counter. I let it do its thing for a few days and it finally got to a value of F1 + 15. I ran an ATTO pass and now I have E9 at F1 - 4. The daily "writes" (as shown by SSD Life) are 22, and as far as I remember ATTO writes around 20 GB; this means the actual amount of data written to the NAND for ATTO was really small, as you would expect it to be. My best guess at the moment is that FW 2.13 has made the GC more aggressive in order to avoid the disconnects (which generally seem to happen when the FW decides it's time to do some housecleaning).
    Last edited by scorp; 10-13-2011 at 08:07 AM.
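    scorp's E9/F1 observation is effectively a write-amplification measurement. Following this thread's reading of the SandForce attributes (F1 = host GiB written, E9 = NAND GiB written; that mapping is this thread's convention, not vendor documentation), you can estimate it from two SMART snapshots. The snapshot values below are hypothetical.

    def write_amplification(f1_before, e9_before, f1_after, e9_after):
        """NAND writes divided by host writes between two SMART snapshots."""
        host = f1_after - f1_before
        nand = e9_after - e9_before
        return nand / host if host else float("nan")

    # In the spirit of scorp's report: ~20 GiB of highly compressible
    # benchmark writes barely move E9 (values are made up).
    wa = write_amplification(f1_before=100, e9_before=96,
                             f1_after=120, e9_after=97)
    print(f"Write amplification: {wa:.2f}")   # 0.05, i.e. heavy compression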

  22. #72
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    If you've got your drive running stable, then you should never touch your system again.

    I had the worst luck with MSAHCI, but I'm currently running it now to see if it's stable with my Mushkin.

  24. #74
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The Force GT has been running now for 82 hours and counting.
    I moved the Kingston (X25-V) to the Z68 as I needed this rig for other tasks.

    I'll keep the Force GT running as long as I'm able to, hopefully throughout this weekend.

    I'm not sure whether it's the firmware upgrade or the power-saving scheme changes that have helped this drive.

  25. #75
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The Force GT is still doing fine; uptime is 4 days 10 hours 43 minutes (almost 107 hours).
