Page 18 of 24
Results 426 to 450 of 598

Thread: Sandforce Life Time Throttling

  1. #426
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    Quote Originally Posted by johnw View Post
    That is because warranty throttling is not a feature, but a limitation. It does not benefit anyone except possibly OCZ (no need to explain to an RMA customer that the warranty does not apply because they wrote 1PB in 3 months).

    And customers should avoid drives that are throttled. So basically what you are saying is that OCZ is selling an inferior product, but hiding that fact from potential customers.
    No, I'm not saying OCZ is selling an inferior product at all. What I said is that a casual (read: non-informed or computer-illiterate) user, browsing the net to buy an SSD because they've read that SSDs are very fast, would avoid any SSD that mentioned "throttling" in its specs like the plague and buy another brand.

    I use an OCZ Vertex 2 in my system, and whilst it mightn't be the fastest drive on earth, it has been rock solid so far and is definitely not an inferior product. It might be outdated now that SATA 3 drives are available, but not inferior.
    Intel S1155 Core i7 2600K Quad Core CPU
    Gigabyte GA-Z68X-UD3R-B3 Socket 1155
    DDR3 16GB (4x4G) G.Skill Ripjaws 1600MHz RAM Kit
    128GB Crucial M4 2.5" SATA 3 Solid State Drive (SSD)
    2TB Western Digital BLACK edition 64M SATA HDD
    1TB Western Digital Green 64M SATA HDD
    NVIDIA GTX560 1GB Gigabyte OC PCIe Video Card
    23.6" BenQ XL2410T 3D LED Monitor
    CoolerMaster RC-922M-KKN1 HAF Mid ATX Case Black
    Thermaltake 775 Watt Toughpower XT ATX PSU
    LG BH10LS30 Blu-Ray Writer
    Corsair Hydro H70 High Performance Liquid Cooling System

  2. #427
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by therat View Post
    No, I'm not saying OCZ is selling an inferior product at all. What I said is that a casual (read: non-informed or computer-illiterate) user, browsing the net to buy an SSD because they've read that SSDs are very fast, would avoid any SSD that mentioned "throttling" in its specs like the plague and buy another brand.
    So, since you are saying that a throttled SSD is undesirable to many people -- in other words, many people would prefer an unthrottled SSD -- then you are indeed saying that the throttled OCZ SSDs are inferior to unthrottled SSDs, and OCZ is hiding the fact that their SSDs are inferior by failing to document the throttling in the specifications.

  3. #428
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    As we have seen in the tests in this thread, a throttled SSD is not as good as an unthrottled SSD. However, no new SSD comes in a throttled state, and the reality is that rarely will any user ever reach the throttled state. In the rare cases where an SSD does reach a throttled state, it would be simpler for the manufacturer to RMA the SSD and replace it.

    The throttling does not hamper speed under normal use (just read the reviews of OCZ, Corsair and OWC SF-based drives), and I repeat that to say a certain SSD is inferior because it has throttling built in, when in fact that throttling will rarely if ever be reached by a user, is just plain wrong.

    You obviously have a problem with OCZ made SSDs. I don't have any such loyalties to SSD brands and will buy whichever brand suits my needs at the time at the best price.

  4. #429
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    The objective of the thread was to understand the parameters under which DuraWrite operated. There are a number of vendor configurable options that can make a significant difference.

    • The credit duration is a variable factor.
    • The P/E cycle capability is a variable factor.
    • The workload is a variable factor.
    • The life curve setting is a variable factor.
    • EDIT: How long you leave your SSD powered on is a variable factor.

    DuraWrite controls writes via two mechanisms:

    • Burst write throttling
    • Sustained write throttling

    So far only sustained write throttling has been investigated.

    The credit duration distorts the short-term perspective that users/reviewers see when the drive is new. A better perspective can be gained over the long term. This is the best educated guess I can come up with, based on what I have observed:

    • 0.60TiB per day with 1-year throttle / 25.6GiB per power-on hour / 7.28MiB/s
    • 0.2TiB per day with 3-year throttle / 8.53GiB per power-on hour / 2.42MiB/s
    • 0.12TiB per day with 5-year throttle / 5.12GiB per power-on hour / 1.45MiB/s
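    As a sanity check, the three estimates convert between units consistently. A quick sketch (assumes the drive is powered on 24 hours a day, which is how the per-power-on-hour figures are derived; the last decimal of the MiB/s column in the list appears truncated rather than rounded):

    ```python
    # Convert the long-term throttle estimates between the three units used above.
    def rates(tib_per_day):
        gib_per_hour = tib_per_day * 1024 / 24    # TiB/day -> GiB per power-on hour
        mib_per_sec = gib_per_hour * 1024 / 3600  # GiB/hour -> MiB/s
        return round(gib_per_hour, 2), round(mib_per_sec, 2)

    for years, tib in [(1, 0.60), (3, 0.2), (5, 0.12)]:
        gib_h, mib_s = rates(tib)
        print(f"{tib} TiB/day ({years}-year throttle): {gib_h} GiB/h, {mib_s} MiB/s")
    ```
    
    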

    My SSDs are around 0.8GiB per power-on hour. Throttling is not going to be an issue for me based on the amount I write.
    If I was using my SSD for write intensive tasks (which I don't) it might be a different story.

    My only objective is to understand how DuraWrite works. This hopefully helps people to make educated choices based on their requirements.
    Last edited by Ao1; 08-02-2011 at 02:22 AM.

  5. #430
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    And it's been very informative and greatly appreciated by all. I have a couple of questions, though:

    1. Is DuraWrite implemented the same way by all SSD manufacturers, e.g. OCZ, Corsair, OWC, etc.?

    2. Do you intend to test DuraWrite on a SATA 2 and a SATA 3 SSD from another SandForce manufacturer, e.g. OWC?

    Don't know why, but OWC seems to be the SF manufacturer that I cannot find bad reviews of, be it from testers or end users. It appears they sell to PC and not only Mac, but regardless, maybe they sell so few in comparison to OCZ that there is no negative stuff out there about them.

    Your work helps all users know more about SSDs and that can only be good.

    Cheers

  6. #431
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I have a number of tests I would like to run, which would include testing other SF vendors, different workloads and larger capacity drives.

    AFAIK all SF vendors implement sustained LTT. Mushkin claim to be the only vendor that does not implement burst throttling.

  7. #432
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by Anvil View Post
    @bluestang

    E9 is ~2x Host writes, in normal circumstances it should be 0.6-0.8 * F1, so, about 384-512 would be normal.

    There's something wrong here, could you try reapplying the 1.33 firmware from OCZ?
    Since it's my main OS drive on my Work PC, probably not going to try reapplying the FW.

    Quote Originally Posted by sergiu View Post
    @bluestang
    Just noticed that in between your SMART post E9 changed with 384 while F1 changed with 256 which would give a WA of 1.5
    Hmmmm. 1st SMART post: EA/F1=384 and E9=832, so 384x2=768, +64=832. 2nd SMART post: EA/F1=640 and E9=1216, so 640x2=1280, -64=1216. Don't know what it means. These V2's record in 64GiB chunks, don't they?

    Quote Originally Posted by Ao1 View Post
    Hey bluestang,

    Not sure if this works the same with XP, but if you hit ctrl/ alt/ delete > Start Task Manager > Performance > Resource Monitor >Disk you will see Disk Activity and Processes with Disk Activity.

    Maybe this would help to see if there is any write activity that might be unusual.
    No, there's no Disk Resource Monitor in XP.
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  8. #433
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by therat View Post
    The throttling does not hamper speed (just read the reviews of OCZ, Corsair and OWC SF based drives) under normal use and i repeat that to say a certain SSD is inferior because it has throttling built in when in fact that throttling will rarely if ever be reached by a user, is just plain wrong.
    No, that statement is wrong. And it is terrible logic. If A < B, then A is inferior to B. You might instead claim that A is only slightly inferior to B, but you cannot claim that A is not inferior to B, unless you have abandoned logic.

  9. #434
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by bluestang View Post
    Since it's my main OS drive on my Work PC, probably not going to try reapplying the FW.
    I see your point; there is a risk in reapplying, so you'd have to create a backup, and it could be that it makes no difference at all.

  10. #435
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by bluestang View Post
    Hmmmm. 1st SMAT post EA/F1=384 and E9=832, so 384x2=768 +64=832. 2nd SMART post EA/F1=640 and E9=1216, so 640x2=1280 -64=1216. Don't know what it means. These V2's record in 64GB chunks don't they?
    Normally, from what was observed, E9 records real flash writes while EA/F1 records host writes. The increment is indeed in 64GiB. Now I believe the relation with the doubling is more a coincidence of the moment. The numbers are strange because, judging from the first reading, the WA was around 2.1, while in between it decreased to 1.5. That is strange because normal data is compressible up to a point, so WA is usually below 1. Could you check if you have NTFS last-access update enabled or not: http://www.pctools.com/guides/registry/detail/50/ ? This is a source of small writes. Not sure how that would translate into real-world data, but I guess it would translate into higher WA, because the metadata update would probably be as small as one sector while the real data written would be at least one flash page. Also, could you do some zero-fill testing? I'm interested to see if your drive has the same compression rate as Vapor's drive.
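    For anyone following along, the WA figure being discussed is just the ratio of deltas between two SMART readings. A sketch, with the attribute meanings as described above and bluestang's two readings plugged in:

    ```python
    # Write amplification between two SMART readings: E9 counts NAND (flash)
    # writes, EA/F1 counts host writes; both tick in 64GiB increments on these
    # drives, so coarse granularity limits the precision of any single estimate.
    def write_amplification(e9_start, f1_start, e9_end, f1_end):
        nand_writes = e9_end - e9_start  # GiB written to flash
        host_writes = f1_end - f1_start  # GiB written by the host
        return nand_writes / host_writes

    # bluestang's readings: E9 went 832 -> 1216 while EA/F1 went 384 -> 640
    print(write_amplification(832, 384, 1216, 640))  # 384/256 = 1.5
    ```
    
    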

  11. #436
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Asked the person who modified and flashed the firmware on my SF-1200; his opinion is that my drive has a counting bug and that it's actually writing half of what it says it is.

    Makes sense: the reported WA of everything is exactly double, and write performance is where it should be (not half of where it should be).

  12. #437
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    If WA is doubled, then we should see values between 2.2 and 2.5 for incompressible data. I guess you're already testing something like this.

  13. #438
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Expanded the compression testing a bit; the SF-1200 is very far behind now, but the SF-2200 is done.

    The SF-1200 chart will be included again once it's at least 50% complete (tomorrow morning). So far all results are in line (after correcting for 2x E9) with the SF-2200 numbers, though.

    [Attachment: sf2200comp.png]

    Two ways of looking at the data: 1) overall write amplification (WA), just simple NAND writes divided by host writes, and 2) compressed size, which is intended to show how well the controller can compress the data (independent of how well it writes the data). Compressed size assumes that the compression algorithm can package incompressible data to 101% of its original size and that write amplification as a ratio (1.097/1.01) is constant across all sequential writes. Compressed size then takes the overall WA number from each setting and factors out the write amplification (WA * 1.01 / 1.097).
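    The "compressed size" normalisation works out like this. A sketch of the calculation described above; the 1.01 and 1.097 constants are the stated assumptions, not measured facts:

    ```python
    # Factor the fixed write-amplification component out of a measured overall
    # WA, leaving an estimate of how far the controller compressed the data.
    INCOMPRESSIBLE_PACKAGING = 1.01  # assumed: incompressible data packages to 101%
    INCOMPRESSIBLE_WA = 1.097        # assumed: WA measured on incompressible data

    def compressed_size(overall_wa):
        return overall_wa * INCOMPRESSIBLE_PACKAGING / INCOMPRESSIBLE_WA

    # Fully incompressible data (WA = 1.097) maps back to the assumed 101%:
    print(round(compressed_size(1.097), 3))  # 1.01
    ```
    
    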

    My observations: it seems the deduplication ability of the SF-22xx is pretty weak, at least on a macro level. Two pieces of evidence: 1) there was no observable difference between the standard "allow deduplication" and ND (no deduplication) settings, and 2) 0-fill writes only compacted down to 15%. Any decent wide-view deduplication ability should have compressed that down magnitudes smaller. Also, if you just look at the compressed size numbers, the SF controller beats NTFS compression at every setting except 0-fill and incompressible.

    Seems the 46% setting is close in compressibility to my C Volume (Windows and apps, 44.9GiB) and the 67% setting is close in compressibility to my D Volume (My Documents: Office docs, JPEGs, PNGs, and DNGs, 23.8GiB).

    I also did a small test (~95GiB) of writing my D Volume multiple times to the SSD without deletion (rather than the standard protocol of write, delete, write, delete) and there was no observable difference in WA/compression, further indicating a very weak deduplication ability.

  14. #439
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Nice data, especially the Windows/apps numbers and the documents numbers. It will be interesting to see how much those two compress with the older Sandforce controller.

    For the 22XX, it looks like around 90% compression ratio is what to expect for typical data files, except for people who keep re-installing Windows and applications over and over, who should see better than 90% compression ratio.

  15. #440
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    It will be interesting to see how much those two compress with the older Sandforce controller.
    I dread doing that...manually copying then deleting the storage file 100+ times for each file

  16. #441
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I dread doing that...manually copying then deleting the storage file 100+ times for each file
    If only there was an easy way to automate that.

  17. #442
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    New chart; hopefully it's readable once you know the basics of deciphering it.

    Solid lines = compression setting in Anvil's app vs. compressed size
    Dashed lines = C Volume compressed size with the various algorithms/controllers
    Dotted lines = D Volume compressed size with the various algorithms/controllers

    [Attachment: sf2200compchart.png]

    Basically it shows that the SF-2200 is already better than NTFS compression (at least in the areas where it matters most) but still lags behind even low-resource LZMA (7zip Fastest). In the coming years I wouldn't be surprised to see those curves/lines approached (as I've said a few times, dedicated hardware can be magnitudes more efficient than a CPU). Upgrading to 7zip-Fastest-level compression honestly wouldn't be worth a huge amount in performance, but if the opportunity cost is right, it seems reachable.

    Then there's a tricky situation that we're a very long way away from (hopefully). Although 7zip/LZMA Normal is pretty slow on our CPUs right now, I wouldn't be surprised if that kind of compression performance at interface speeds is reachable within 5-10 years from a <1 watt chip. When it is possible, it could essentially break a benchmark. With just modest resources, LZMA can compress generated data much more than real data. We take for granted that the 'incompressible' data generated now is incompressible to the current SF controllers, but the reality is that it's very compressible to even moderate-resource LZMA. Just how 'exploitable' it is can be seen in the difference between 7zip/LZMA Normal and RAR Best: 7zip/LZMA Normal is ~8x as good as RAR Best with generated data, but only a few percent better with real data. The 0-fill benchmark performance of SF hardware is pretty bogus, but what if all generated-data benchmarks behaved like 0-fill does now? [/alarmism]

  18. #443
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I found a quote from SandForce that skims over compression and talks about the importance of RAISE.

    Interestingly it appears that RAISE has been DISABLED on most of the SF2xxx client based offerings.

    "In the recent article by David Rosenthal he mentions a conversation with Kirk McKusik and the ZFS team at Sun Microsystems (Oracle). That conversation explains why it is critical that meta data not be lost or corrupted. He goes on to say that “If the stored metadata gets corrupted, the corruption will apply to all copies, so recovery is impossible.”

    SandForce employs a feature called DuraWrite which enables flash memory to last longer through innovative patent pending techniques. Although SandForce has not disclosed the specific operation of DuraWrite and its 100% lossless write reduction techniques, the concept of deduplication, compression, and data differencing is certainly related. Through all the years of development and OEM testing with our SSD manufacturers and top tier storage users, there has not been a single reported failure of the DuraWrite engine. There is no more likelihood of DuraWrite loosing data than if it was not present.

    We completely agree that any loss of metadata is likely to corrupt access to the underlying data. That is why SandForce created RAISE (Redundant Array of Independent Silicon Elements) and includes it on every SSD that uses a SandForce SSD Processor. All storage devices include ECC protection to minimize the potential that a bit can be lost and corrupt data. Not only do SandForce SSD Processors employ ECC protection enabling an UBER (Uncorrectable Bit Error Rate) of greater than 10^-17, if the ECC engine is unable to correct the bit error RAISE will step in to correct a complete failure of an entire sector, page, or block.

    This combination of ECC and RAISE protection provides a resulting UBER of 10^-29 virtually eliminates the probabilities of data corruption. This combined protection is much higher than any other currently shipping SSD or HDD solution we know about. The fact that ZFS stores up to three copies of the metadata and optionally can replicate user data is not an issue. All data stored on a SandForce Driven SSD is viewed critical and protected with the highest level of certainty."


    http://storagemojo.com/2011/06/27/de...of-good-thing/

  19. #444
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    Interestingly it appears that RAISE has been DISABLED on most of the SF2xxx client based offerings.
    What? Why would they do that? Just to save on the extra flash memory?

  20. #445
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Only OCZ can give the answer.

    Here is what SF say on their web site:

    "RAISE - Redundant Array of Independent Silicon Elements - writes data across multiple flash die to enable recovery from a failure in a sector, page or entire block, just like the concept of multi-drive RAID used in disk-based storage, but RAISE only requires a single drive.
    SSDs are built using flash die that are assembled up to 8 die per package. For optimum capacity the SSD can be assembled with up to 16 packages. That puts 128 individual die in one SSD. If the failure rate (unrecoverable read error) of one MLC die is conservatively 1,000 PPM (a failure probability of 0.1%) then using the probability formula for 128 devices the failure rate increases to 12.0% over the life of the SSD.
    Using RAISE technology in a SandForce Driven SSD reduces the probability of a single unrecoverable read error by 100 times to 0.001%. Applying that same formula, the failure rate of the SSD drops from 12.0% to a mere 0.13%, nearly 100 times lower."


    http://www.sandforce.com/index.php?i...rentId=3&top=1
    Last edited by Ao1; 08-03-2011 at 12:39 AM.
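    The failure-rate figures in the SF quote check out if you model the dies as independent. A quick sketch of the probability formula they reference:

    ```python
    # Per-die unrecoverable-read probability p across n independent dies gives
    # a drive-level failure probability of 1 - (1 - p)^n.
    def drive_failure_rate(p_die, n_dies=128):
        return 1 - (1 - p_die) ** n_dies

    print(f"{drive_failure_rate(0.001):.1%}")    # 0.1% per die, no RAISE -> ~12.0%
    print(f"{drive_failure_rate(0.00001):.2%}")  # 100x lower per-die rate -> ~0.13%
    ```
    
    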

  21. #446
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Vapor View Post
    I dread doing that...manually copying then deleting the storage file 100+ times for each file
    It is on my to-do list, can't promise anything yet though.

    It can be done using a standard batch file.

    In its simplest form one can

    Code:
    xcopy drive:\sourcefolder drive:\destinationfolder /E /V /C /Y
    rmdir drive:\destinationfolder /s /q
    Those commands are from the top of my head and should be tested/verified.

    then just duplicate to get 100 operations and save to a batch file...
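    The same copy/delete churn can also be sketched cross-platform in Python rather than duplicating lines in a batch file (the function name and paths here are illustrative, not part of anyone's actual test setup):

    ```python
    import shutil

    def churn(src, dst, n=100):
        """Copy the source tree to dst, then delete dst again, n times."""
        for _ in range(n):
            shutil.copytree(src, dst)  # dst must not exist before each copy
            shutil.rmtree(dst)         # remove it so the next copy succeeds
    ```

    Called as, e.g., `churn(r"D:\sourcefolder", r"E:\destinationfolder")` with whatever paths and count suit the test.
    
    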

  22. #447
    the jedi master
    Join Date
    Jun 2002
    Location
    Manchester uk/Sunnyvale CA
    Posts
    3,884
    Quote Originally Posted by johnw View Post
    What? Why would they do that? Just to save on the extra flash memory?
    Most of the time the market dictates what gets released. People want 128GB drives, they don't want 120s, etc.

    I like RAISE, and I add more OP to drives. I'm an enthusiast like you guys, that's the difference.
    Got a problem with your OCZ product....?
    Have a look over here
    Tony AKA BigToe


    Tuning PC's for speed...Run whats fast, not what you think is fast

  23. #448
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    If RAISE is disabled, it is disabled; adding OP does not change that or in any way fulfil the same function. There are way too many variables with SF-based drives, and none of them are documented.

  24. #449
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    Quote Originally Posted by Ao1 View Post
    Interestingly it appears that RAISE has been DISABLED on most of the SF2xxx client based offerings.
    So all vendors have disabled RAISE: OCZ, OWC, Corsair, etc.? Wow, a backward step, I would have thought.

  25. #450
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Don't know which vendors do or don't. They will have to answer that, as it is not possible to determine if RAISE is activated just by looking at the NAND capacity. The larger capacity drives (240GB) in general have RAISE; the smaller ones don't.

    Here are a couple of examples:

    SF = 8 channels

    OCZ V3 120GB (WITHOUT RAISE)
    64Gb LU = 69,120Mb / 8.4375GiB
    16 x 8.4375 = 135GiB (Including ECC)
    16 x 8 = 128GiB (Excluding ECC)
    Formatted capacity = 111.79GiB
    Difference between capacity (excluding ECC) and formatted = 16.21GiB /12.7%

    OCZ V3 240GB (WITH RAISE)
    128Gb LU = 138,240Mb / 16.875GiB
    16 x 16.875 = 270GiB (Including ECC)
    16 x 16 = 256GiB (Excluding ECC)
    Formatted capacity = 223.58GiB
    Difference between capacity (excluding ECC) and formatted = 32.42GiB /12.7%
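    The 12.7% figure in both examples is just (raw capacity excluding ECC minus formatted capacity) divided by raw capacity. A quick check, with the capacities as listed above:

    ```python
    # Over-provisioning as a fraction of raw NAND (excluding the ECC spare area).
    def op_fraction(raw_gib, formatted_gib):
        return (raw_gib - formatted_gib) / raw_gib

    print(f"{op_fraction(128, 111.79):.1%}")  # V3 120GB (no RAISE)  -> 12.7%
    print(f"{op_fraction(256, 223.58):.1%}")  # V3 240GB (with RAISE) -> 12.7%
    ```
    
    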

    Omitting RAISE does not reduce the amount of NAND being used (at least as far as I can see).

    So why do it? To devote more OP to help performance at the expense of reliability?

    I don't know the answer.

