
Thread: SSD Write Endurance 25nm Vs 34nm


  1. #1
    Anvil
    Xtreme X.I.P.
    Join Date: Apr 2008
    Location: Norway
    Posts: 2,838
    @Vapor
    Superb job on collecting the compression data, and yes, it's based on the 7Zip Fast compression ratio. (Could be Fastest, will check.)

    Looking at my tests on the SF2 controller, it couldn't keep up with the ratio that 7Zip Fast(est) produces; not sure how the SF1 compares to the SF2.

    For reference, I 7Zipped one of my VMs earlier today (Windows Server 2008 R2, SQL Server, plus some data) and it ended up at ~50% of the original size using 7Zip Fastest.
    Still, it took 40 minutes to produce that file using a W3520 @ 4GHz on an Adaptec 3805 hosting a 3R5 volume. There is no way the SF controller can achieve that sort of compression on the fly, as 40GB is written at a rate of ~100MB/s on a 60GB SF1 drive (based on steady state).
    I'll do some more tests once I get a few more items off my to-do list.
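
    To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. One assumption that isn't stated above: the VM image is taken to be ~40GB, matching the 40GB write figure in the drive example.

    # Back-of-the-envelope: CPU compression throughput vs. what the SF
    # controller would need to sustain the same ratio inline.
    GB = 1000**3

    source_bytes = 40 * GB                        # assumed VM size (see above)
    cpu_time_s = 40 * 60                          # 40 minutes on a W3520 @ 4GHz
    cpu_mb_s = source_bytes / cpu_time_s / 1e6    # ~17 MB/s

    drive_mb_s = 100                              # ~100MB/s steady-state, 60GB SF1

    print(f"CPU, 7Zip Fastest: ~{cpu_mb_s:.0f} MB/s")
    print(f"SF1 inline writes: ~{drive_mb_s} MB/s")
    print(f"Controller would need ~{drive_mb_s / cpu_mb_s:.0f}x the CPU's throughput")

    In other words, the controller would have to compress roughly six times faster than a 4GHz quad-core to match 7Zip Fastest at that write rate, which is why a much weaker compression scheme seems the only plausible explanation.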

  2. #2
    Vapor
    Admin
    Join Date: Feb 2005
    Location: Ann Arbor, MI
    Posts: 12,338
    Quote Originally Posted by Anvil
    @Vapor
    Superb job on collecting the compression data, and yes, it's based on the 7Zip Fast compression ratio. (Could be Fastest, will check.)

    Looking at my tests on the SF2 controller, it couldn't keep up with the ratio that 7Zip Fast(est) produces; not sure how the SF1 compares to the SF2.

    For reference, I 7Zipped one of my VMs earlier today (Windows Server 2008 R2, SQL Server, plus some data) and it ended up at ~50% of the original size using 7Zip Fastest.
    Still, it took 40 minutes to produce that file using a W3520 @ 4GHz on an Adaptec 3805 hosting a 3R5 volume. There is no way the SF controller can achieve that sort of compression on the fly, as 40GB is written at a rate of ~100MB/s on a 60GB SF1 drive (based on steady state).
    I'll do some more tests once I get a few more items off my to-do list.
    I have zero doubt hardware designed for compression/dedup could do twice (at least) what our CPUs do within just a 1W power envelope...but that doesn't mean the SF1 and SF2 controllers can do it. It's a safe bet they can't, and that their compression levels are weaker than the weakest RAR/7Zip setting; too bad there's no way of running their compression on our CPUs to see what they can do with more precision than the 64GB (SF1) or 1GB (SF2) resolution the SMART values give.
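
    Here's a minimal sketch of the estimate the SMART counters do allow, assuming lifetime host writes and NAND writes are both exposed (64GB ticks on SF1, 1GB on SF2); the function and the example numbers are mine, purely hypothetical.

    # Minimal sketch: effective compression ratio from SMART lifetime
    # counters, with worst-case bounds from the counter granularity.
    def compression_ratio(host_ticks, nand_ticks, unit_gb):
        """Ratio of NAND writes to host writes; below 1.0 means the
        controller compressed the data. Each counter is +/- one tick."""
        host_gb, nand_gb = host_ticks * unit_gb, nand_ticks * unit_gb
        ratio = nand_gb / host_gb
        lo = max(nand_gb - unit_gb, 0) / (host_gb + unit_gb)
        hi = (nand_gb + unit_gb) / max(host_gb - unit_gb, unit_gb)
        return ratio, lo, hi

    # Hypothetical SF1 drive: 30 ticks of NAND writes per 40 ticks of
    # host writes, in 64GB units.
    r, lo, hi = compression_ratio(40, 30, unit_gb=64)
    print(f"~{r:.2f} (anywhere from {lo:.2f} to {hi:.2f} at this resolution)")

    At 64GB resolution the bounds stay wide until a lot of data has passed through the drive, which is exactly the precision problem; the SF2's 1GB ticks tighten the estimate much faster.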

    Almost done with the charts for all the drives so far (minus the V2-40GB...not sure whether to include it, as testing essentially errored out). I'm including a new chart of normalized writes vs. wear, which is kind of necessary now that drives of different sizes are entering testing; writes will be normalized to the amount of NAND on the drive, not the advertised capacity.
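
    A minimal sketch of that normalization (the drive entries and NAND figures below are made-up placeholders, not the drives in testing):

    # Normalize host writes to the drive's raw NAND capacity so drives
    # of different sizes can share one chart. All numbers are placeholders.
    drives = {
        "Drive A": {"writes_gb": 150_000, "nand_gb": 64},
        "Drive B": {"writes_gb": 90_000,  "nand_gb": 40},
    }

    for name, d in drives.items():
        fills = d["writes_gb"] / d["nand_gb"]   # full NAND-capacity writes
        print(f"{name}: ~{fills:,.0f}x its NAND written")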

    Working on bar charts showing writes from 100-to-0 wear as well as total writes done so far. The 100-to-0 figure will be extrapolated until MWI = 0 and then frozen...so while MWI > 0, total writes will be less than the 100-to-0 figure, but after MWI hits 0, total writes will be greater than it. Would "MWI Exhaustion" be a better name for the 100-to-0 bar?
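
    The extrapolate-then-freeze rule, as a minimal sketch (linear in MWI; the names and numbers are mine):

    # 100-to-0 bar: extrapolate linearly while MWI > 0, freeze once it hits 0.
    def writes_100_to_0(total_writes_gb, mwi, frozen_gb=None):
        """Estimated writes to take MWI from 100 down to 0."""
        if mwi >= 100:
            return None                  # no wear consumed yet
        if mwi > 0:
            # writes so far, scaled by the fraction of wear consumed
            return total_writes_gb * 100 / (100 - mwi)
        return frozen_gb                 # frozen at MWI = 0; total keeps growing

    print(writes_100_to_0(50_000, mwi=60))                      # 125000.0 projected
    print(writes_100_to_0(140_000, mwi=0, frozen_gb=125_000))   # 125000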
