Page 110 of 220
Results 2,726 to 2,750 of 5495

Thread: SSD Write Endurance 25nm Vs 34nm

  1. #2726
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Oh.

    That makes me sad.

    You could try D-flashing it. The 1.7 D-flash is public on the OCZ forum.

  2. #2727
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    @Anvil: I agree. For specific applications, SF-based drives are a great solution. The downer is that those applications usually operate in environments where reliability is of critical importance. SF drives suffer in that regard, not so much because of SandForce itself but because of the limited ability of SF vendors to do their part of the validation process.

    My SF2xxx is also dead by the way. It clapped out over the weekend.

  3. #2728
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    MD5 test on static data just hangs at the 2.6 GiB point of 32.6 GiB total. I'm leaving work in about 90 minutes, so I'll let it run overnight to see what happens. CDI C6 "Total Count of Read Sectors" keeps ticking upwards, just slowly.
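    For anyone wanting to reproduce this kind of check: a chunked MD5 pass never needs the whole 32.6 GiB file in RAM at once. A minimal Python sketch (the file path is whatever your static data lives at):

```python
import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    """Compute the MD5 of a large file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the checksum recorded when the static data was
# first written; a mismatch means the drive returned different bits.
```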
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GB, Win 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  4. #2729
    Xtreme Member
    Join Date
    May 2009
    Posts
    201
    @bluestang: D-flash it. Vertex drives do have a propensity to get into this mode. If the D-flash doesn't help, then it's really dead! 900TB is nothing short of a miracle anyway.

  5. #2730
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by Christopher View Post
    Oh.

    That makes me sad.

    You could try D-flashing it. The 1.7 D-flash is public on the OCZ forum.
    Quote Originally Posted by devsk View Post
    @bluestang: D-flash it. Vertex drives do have a propensity to get into this mode. If the D-flash doesn't help, then it's really dead! 900TB is nothing short of a miracle anyway.
    If the MD5 test is still not finished when I get back in tomorrow morning, then I will try and D-Flash it to 1.7.

    EDIT:
    Update: MD5 test finally finished and test on 32.6 GiB static data failed.
    Last edited by bluestang; 11-21-2011 at 09:48 AM. Reason: MD5 results


  6. #2731
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    Not all motherboards have the problem with lower 4KB random reads when processor C-states are enabled. But I'm not sure which motherboards are which.

    Also, remember that CDM sometimes gives strangely high results for 4KB read on Sandforce SSDs.

    What does AS-SSD measure?
    I'd say it's mostly the motherboards using the PCH that are affected. (I'll do some tests on the ICH to confirm.)

    CDM is not using threads, and I can't find anything wrong with the code.
    ASU and AS SSD both use threads and should be close, but it's plain to see that AS SSD uses a different approach when measuring 4K in particular.
    (AS SSD takes forever on HDDs and uses multiple files when testing QD; ASU and IOMeter use a single file for QD testing.)

    As a result of the different approaches they will never compare 100%. They should be relatively close, and they are most of the time.
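    For reference, the single-file QD approach described above can be sketched like this (Python; paths and sizes are illustrative, and a real benchmark would bypass the OS cache with O_DIRECT or similar, which this sketch does not):

```python
import os
import random
import threading

def random_reads(path, file_size, block=4096, count=1000):
    """Issue `count` random aligned 4 KiB reads against one file; return bytes read."""
    fd = os.open(path, os.O_RDONLY)
    total = 0
    try:
        blocks = file_size // block
        for _ in range(count):
            offset = random.randrange(blocks) * block
            total += len(os.pread(fd, block, offset))
    finally:
        os.close(fd)
    return total

def random_reads_qd(path, file_size, qd=4, count_per_thread=250):
    """Approximate queue depth `qd` with one thread per outstanding read,
    all hitting the same file (the ASU/IOMeter-style approach)."""
    threads = [threading.Thread(target=random_reads,
                                args=(path, file_size, 4096, count_per_thread))
               for _ in range(qd)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

    AS SSD's multi-file approach would instead open a separate file per worker, which is one reason the two tools never agree exactly.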
    -
    Hardware:

  7. #2732
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Christopher View Post
    I added AS SSD results above. I don't think the Chronos' 4K reads were particularly high with CDM, but the AS SSD results seem excessively low. My M4 only scores about 12 MB/s 4K reads with AS SSD.
    LOL. Anytime you see 4KB random reads over 30MB/s, that is quite high. The only SSDs I have seen achieve that level consistently are Crucial C300s and, I believe, some Indilinx drives (well, ioDrives are much higher, but I am talking about normal SSDs).

    For some reason I do not understand, CDM shows anomalous 4KB read results on Sandforce 22XX SSDs. If you use AS-SSD, IOMeter, or ASU (incompressible data), they will usually measure in the low 20s MB/s. But the same Sandforce SSD on CDM often measures low-to-mid 30s MB/s. So it is CDM that is the anomalous one for 4KB random read tests.

    If you disable CPU C-States, you will probably see your m4 4KB random read result go up to low-20 MB/s.
    Last edited by johnw; 11-21-2011 at 10:22 AM.

  8. #2733
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    CDM is not using threads and I can't find anything wrong with the code.
    Right, we have discussed this before. No one has an explanation for it, but many SF22XX SSDs get low-to-mid 30s MB/s 4KB random reads on CDM, but low 20s MB/s random reads with AS-SSD, IOMeter, ASU.

  9. #2734
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    For client applications you get a great boost from the spare area that is created by compressing the OS and application installs, which saves around 4GB of NAND writes. After that, however, the savings start to evaporate. We know that SF can easily compress zeros, but it struggles to compress anything else in client-based applications. For sure SF cannot compress anything close to the theoretical compressibility of data in client applications at low QD, so I would argue quite strongly that the theoretical compressibility of application data is nothing like what can be achieved in real life. No one (to my knowledge) using SF drives for normal client-based activities has been able to demonstrate a significant difference between host and NAND writes.
    Right. For flash longevity, Sandforce compression offers almost no benefit, since as you say, only the OS install and application installs have data that the Sandforce SSDs can compress. Most user data saved to the SSDs after the initial installs will be hardly compressed at all. And since any SSD used to write hundreds of TBs is unlikely to be installing the OS over and over, it is likely that most of that data written will not benefit much from compression.
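    The compressibility gap is easy to demonstrate: zeroed space shrinks to almost nothing, while already-compressed user data (media, archives, modeled here with random bytes) barely shrinks at all. A quick illustration with zlib, purely as a stand-in, since SandForce's actual algorithm is proprietary:

```python
import os
import zlib

def ratio(data):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

zeros = bytes(1024 * 1024)               # e.g. empty/trimmed space
random_bytes = os.urandom(1024 * 1024)   # stands in for media/encrypted data

print(f"zeros:  {ratio(zeros):.3f}")     # well under 1% of original size
print(f"random: {ratio(random_bytes):.3f}")  # roughly 100% of original size
```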

  10. #2735
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by bluestang View Post
    Update: MD5 test finally finished and test on 32.6 GiB static data failed.
    Do you still have the original file? If yes, could you do a bit-to-bit compare? I would be interested in how many mismatches there are and whether they sit inside a single page or are spread across multiple pages.
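    A compare like that could be done by diffing the two files in chunks and bucketing mismatched bytes by page; the 4 KiB page size below is an assumption, since the M225's actual NAND page size may differ:

```python
from collections import defaultdict

def diff_by_page(path_a, path_b, page_size=4096, chunk=1024 * 1024):
    """Return {page_number: mismatched_byte_count} for two same-sized files."""
    pages = defaultdict(int)
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if not a and not b:
                break
            for i, (x, y) in enumerate(zip(a, b)):
                if x != y:
                    pages[(offset + i) // page_size] += 1
            offset += max(len(a), len(b))
    return dict(pages)
```

    A handful of pages each with many flipped bytes would point at failed NAND pages; single-bit errors scattered everywhere would suggest something else.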

  11. #2736
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by sergiu View Post
    Do you still have the original file? If yes, could you do a bit-to-bit compare? I would be interested in how many mismatches there are and whether they sit inside a single page or are spread across multiple pages.
    Also it would be good if he can clarify whether he was able to compute the MD5 checksum at all, or whether the failure was due to part of the file being unreadable.

  12. #2737
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    491.71TB Host writes
    Reallocated sectors : 05 12
    Available Reserved Space : E8 99

    MD5 OK

    34.71MiB/s on avg (~33 hours)

    --

    Corsair Force 3 120GB

    01 88/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 52 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 406317 (Raw writes) ->397TiB
    F1 540818 (Host writes) ->528TiB

    MD5 OK

    106.34MiB/s on avg (~33 hours)

    power on hours : 1565

    B1 is down again from 53 to 52.
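    Since E9 and F1 are both reported in GiB, the ratio of raw NAND writes to host writes gives the Force 3's effective write amplification; anything below 1.0 means compression is writing less to flash than the host sent:

```python
def write_amplification(raw_gib, host_gib):
    """Effective WA from SMART E9 (raw NAND writes) over F1 (host writes)."""
    return raw_gib / host_gib

raw, host = 406317, 540818  # the E9 / F1 readings above, in GiB
print(f"raw: {raw / 1024:.0f} TiB, host: {host / 1024:.0f} TiB, "
      f"WA: {write_amplification(raw, host):.2f}")  # WA comes out around 0.75
```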

  13. #2738
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    You know, I wonder how much that M225 actually had written to it before it was refurbished and repackaged. The WA of earlier Indilinx firmware was pretty high, so it could have had a significant amount of writes on it before it was sent to bluestang and became an M225 Turbo Edition thanks to a little flashing.

  14. #2739
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Sad to see the Indilinx die too. So far none of the drives has ended up read-only after failing. So much for the "SSDs do not wear out, you don't lose data, they just become read-only" myth. I bet the C300 would have made it to 1 PB easily, if not more.

    BTW, does anybody know what happened to Vapor, his charts, and his C300? Anyone got a C300 to prove my point? Thanks!

  15. #2740
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Today's update:
    Kingston V+100
    267.6568 TiB
    1103 hours
    Avg speed 25.52 MiB/s
    AD still 1.
    168= 1 (SATA PHY Error Count)
    P/E?
    MD5 OK.
    Reallocated sectors : 00


    My write speed is still low. I've tried formatting and letting TRIM work, and also just deleting all the files before copying them back, but with no luck. I'm away for over a week, so my only option is TeamViewer.

    Intel X25-M G1 80GB
    75,8924 TiB
    19058 hours
    Reallocated sectors : 00
    MWI=110 to 92
    MD5 =OK
    49.23 MiB/s on avg


    @bluestang
    There is a wave of malfunctioning SSDs right now. The M225>Vertex Turbo gave it a good run for the money.
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  16. #2741
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Hmm. I think we need to have a discussion concerning the charts and graphs, given that Vapor is MIA.

    There have been monumental changes and additions since the last update, and I think at a bare minimum we need a basic chart on the first page reflecting those changes.

    BTW Ao1,

    I don't necessarily disagree with your contention that SandForce drives (in a consumer environment) don't really benefit from compression after the OS and applications are installed. However, I look at incompressible performance. I don't assume I'm always going to get the 555 MB/s+/500 MB/s+ a drive is rated for. I believe that a 2281 with Toggle NAND would still be fast without compression of any kind. Compression is just the icing on the cake, and one more reason to choose a synchronous-equipped model over an async one. Just because you might not benefit as much from dedupe doesn't mean there's no benefit at all.

    Which 120GB drive is as fast as a Chronos Deluxe/MaxIOPS/Wildfire/OWC Mercury Pro 120GB, even without compression speeding things along?
    Last edited by Christopher; 11-21-2011 at 04:23 PM.

  17. #2742
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Christopher View Post
    However, I look at incompressible performance. I don't assume I'm always going to get the 555 MB/s+/500 MB/s+ a drive is rated for. I believe that a 2281 with Toggle NAND would still be fast without compression of any kind.
    No, the problem is that the performance of Sandforce SSDs for incompressible data is WORSE than other SSDs that do not do compression. That is what Ao1 was talking about. There is minimal longevity advantage from compression for most user data, and there is a performance hit for reading back the compressed data. For details, look in some of the other threads where Ao1 does a lot of measurements.

  18. #2743
    Xtreme Member
    Join Date
    Jun 2011
    Posts
    145
    There's definitely a performance hit relative to compressible data, but don't the toggle and sync NAND SF drives still outperform most non-SF drives when using incompressible data?

  19. #2744
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by squick3n View Post
    There's definitely a performance hit relative to compressible data, but don't the toggle and sync NAND SF drives still outperform most non-SF drives when using incompressible data?
    Not really. A Crucial m4, Samsung 830, or Intel 510 will beat a Sandforce 22XX SSD in many benchmarks when using incompressible data. Of course, it depends on the exact benchmark, and the Sandforce SSDs are faster in some specific areas. But I'd rather have any of those other three in most cases.

  20. #2745
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    493.19TB Host writes
    Reallocated sectors : 05 12
    Available Reserved Space : E8 99

    MD5 OK

    34.16MiB/s on avg (~46 hours)

    --

    Corsair Force 3 120GB

    01 85/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 51 (Wear range delta)
    E6 100 (Life curve status)
    E7 10 (SSD Life left)
    E9 410014 (Raw writes) ->400TiB
    F1 545737 (Host writes) ->533TiB

    MD5 OK

    106.34MiB/s on avg (~46 hours)

    power on hours : 1578

    B1 is down again from 52 to 51.

    --

    We do need to find someone to do the charts, I've had no response at all.

  21. #2746
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by sergiu View Post
    Do you still have the original file? If yes, could you do a bit-to-bit compare? I would be interested in how many mismatches there are and whether they sit inside a single page or are spread across multiple pages.
    I still have the original file.

    Quote Originally Posted by johnw View Post
    Also it would be good if he can clarify whether he was able to compute the MD5 checksum at all, or whether the failure was due to part of the file being unreadable.
    Yes, I was able to compute the checksum. It took 2+ hours to finish, and the checksum did not match the original. However, the drive is still accessible, and the MD5 test that ASU performs every 50 loops during testing does pass...



    I was also able to run an ATTO benchmark on it this morning...



    I believe there are areas that just aren't writable and/or readable anymore. I'm open to ideas on what to do next.


  22. #2747
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    For the hell of it I ran W7 CHKDSK...

    Code:
    Chkdsk was executed in read/write mode.  
    
    Checking file system on X:
    Volume label is M225_64_NS.
    
    CHKDSK is verifying files (stage 1 of 5)...
    Cleaning up instance tags for file 0x2a.
      9728 file records processed.
      File verification completed.
      3 large file records processed.
      0 bad file records processed.
      0 EA records processed.
      0 reparse records processed.
    
    CHKDSK is verifying indexes (stage 2 of 5)...
      9764 index entries processed.
      Index verification completed.
    
    
    CHKDSK is verifying security descriptors (stage 3 of 5)...
      9728 file SDs/SIDs processed
    Cleaning up 18 unused index entries from index $SII of file 0x9.
    Cleaning up 18 unused index entries from index $SDH of file 0x9.
    Cleaning up 18 unused security descriptors.
    Security descriptor verification completed.
      19 data files processed.
    CHKDSK is verifying Usn Journal...
      66195040 USN bytes processed.
    Usn Journal verification completed.
    CHKDSK is verifying file data (stage 4 of 5)...
    Read failure with status 0xc00000b5 at offset 0x32aea1000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x32aeb0000 for 0x1000 bytes.
    Read failure with status 0xc00000b5 at offset 0x32aeb1000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x32aeb1000 for 0x1000 bytes.
    Read failure with status 0xc00000b5 at offset 0x5cf582000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x5cf58c000 for 0x1000 bytes.
    Read failure with status 0xc00000b5 at offset 0x5cf58d000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x5cf58d000 for 0x1000 bytes.
    Read failure with status 0xc00000b5 at offset 0x8d2dd0000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x8d2dd0000 for 0x1000 bytes.
    Read failure with status 0xc00000b5 at offset 0x8d2dd1000 for 0x10000 bytes.
    Read failure with status 0xc00000b5 at offset 0x8d2dd1000 for 0x1000 bytes.
    Windows replaced bad clusters in file 55
    of name \Static\Static.zip.
      9712 files processed.
    File data verification completed.
    CHKDSK is verifying free space (stage 5 of 5)...
      7014172 free clusters processed.
    Free space verification is complete.
    Adding 6 bad clusters to the Bad Clusters File.
    Correcting errors in the Volume Bitmap.
    Windows has made corrections to the file system.
    
      62517247 KB total disk space.
      34317456 KB in 29 files.
           464 KB in 20 indexes.
        142611 KB in use by the system.
         65536 KB occupied by the log file.
      28056692 KB available on disk.
    
          4096 bytes in each allocation unit.
      15629311 total allocation units on disk.
       7014173 allocation units available on disk.
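    Given the "4096 bytes in each allocation unit" line, each failing offset maps straight onto NTFS clusters, which shows the read failures are confined to three small regions of the volume:

```python
CLUSTER = 4096  # allocation unit size from the CHKDSK report above

# (offset, length) of the first read failure in each affected region
failures = [(0x32aea1000, 0x10000),
            (0x5cf582000, 0x10000),
            (0x8d2dd0000, 0x10000)]

for off, length in failures:
    first = off // CLUSTER
    print(f"offset {off:#x} -> clusters {first}..{first + length // CLUSTER - 1}")
```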
    EDIT1: C5 "Read Failure Block Count" (uncorrectable bit errors) went from 1 to 3 as a result of CHKDSK...



    EDIT2: Ran a 7-Zip extraction test on my static.zip file, and only 3 files out of 4522 threw CRC failures due to "broken file".
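    The same CRC sweep 7-Zip ran can be reproduced with Python's zipfile module, which verifies each member's stored CRC-32 as it is read back; the function below lists every broken member rather than stopping at the first:

```python
import zipfile

def broken_members(path):
    """Return the names of all archive members whose CRC-32 check fails."""
    bad = []
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            try:
                with zf.open(info) as member:
                    while member.read(1024 * 1024):  # CRC verified at EOF
                        pass
            except zipfile.BadZipFile:
                bad.append(info.filename)
    return bad
```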
    Last edited by bluestang; 11-22-2011 at 08:27 AM. Reason: added more info


  23. #2748
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Perhaps D-flashing would help, but I think it's a long shot. Might as well try.

  24. #2749
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Well, we had a power outage here for about 45-50 minutes. My UPS only lasted 2 minutes, the battery crapped out. Better get one on order!

    Anything else gonna go wrong...oh yeah, wait! When power came back on, only 2 of the 3 servers here fired back up after I shut them down (it looked like power would be out for hours). A damn power supply and some memory were bad.

    So, my endurance/work PC was down without power for over an hour...and the M225 is still here! It didn't suffer from the power-off state like the others.


  25. #2750
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by bluestang View Post
    Well, we had a power outage here for about 45-50 minutes. My UPS only lasted 2 minutes, the battery crapped out. Better get one on order!

    Anything else gonna go wrong...oh yeah, wait! When power came back on, only 2 of the 3 servers here fired back up after I shut them down (it looked like power would be out for hours). A damn power supply and some memory were bad.

    So, my endurance/work PC was down without power for over an hour...and the M225 is still here! It didn't suffer from the power-off state like the others.
    Bravo


    EDIT
    Incidentally, I got my Vertex LE in the mail today. The few benches I've run on it seem pretty good, but for some reason the SMART attributes that track reads/host writes/raw writes don't seem to be working. Not sure if that's normal or not, but overall it's pretty nice. It might also be nice to have an SF1000 without Hynix NAND in the test.



    Last edited by Christopher; 11-22-2011 at 11:09 AM.

