
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #626
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I was going to add that it might finish quicker as well

    The Kingston SSDNow V+100 would be a good one to test. (~200MB/s sequential writes)

    Here is part of a summary from Anandtech

    "The second issue is the overly aggressive garbage collection. Sequential performance on the V+100 just doesn't change regardless of how much fragmentation you throw at the drive. The drive is quick to clean and keeps performance high as long as it has the free space to do so. This is great for delivering consistent performance, however it doesn't come for free. I am curious to see how the aggressive garbage collection impacts drive lifespan."

  2. #627
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Quote Originally Posted by Vapor View Post
    I say just default values like what johnw is doing

    12GB for the free space, 0-fill, allow dedup.
    I'll do that, and keep running totals just to keep it simple.
    I'll fire up mine when you are ready tomorrow.
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  3. #628
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    The Kingston SSDNow V+100 would be a good one to test. (~200MB/s sequential writes)
    I forgot about that one! It can match the Samsung 470 in sequential write speed at 64GB capacity. I wonder if it has more interesting SMART attributes than the Samsung.

  4. #629
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by johnw View Post
    There are two more data points I'd be interested in seeing: the compression your Sandforce drive achieves on your C: and D: uncompressed archive files.

    I guess you can measure it by just observing the SMART values on the drive, then copying one file to the drive, then looking at the SMART values again to find the compression (assuming that attribute for actual flash writes is accurate). Maybe you have to delete the file and re-copy it several times to get an accurate measurement?
    Also, if you measure the write speed of your C: and D: drive files when copied to the SF SSD, and also measure the write speed of a completely randomized data file (for example, encrypt your C: or D: drive file), then the ratio of the write speed of the random file to the write speed of the C: or D: drive file should give you an independent estimate of the compression ratio achieved by the SF controller. It would be interesting to compare the compression ratio estimated with that method to the compression ratio computed from the SMART values of the drive. Just to be sure SF is not pulling a fast one!
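    A rough sketch of that write-speed-ratio estimate (not a tool from this thread; the file paths below are placeholders, and OS file-cache effects would have to be controlled for in practice):

```python
import os, time

BLOCK = 4 * 1024 * 1024  # copy in 4 MiB chunks

def timed_write(src_path, dst_path):
    """Copy src to dst on the SandForce drive, fsync, and return MB/s."""
    size = os.path.getsize(src_path)
    start = time.time()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            dst.write(chunk)
        dst.flush()
        os.fsync(dst.fileno())  # make sure the data actually reaches the SSD
    return size / (time.time() - start) / 1e6

# Placeholder paths: a real-data archive and a fully randomized (e.g. encrypted) copy of it
real_speed   = timed_write("D:/archive.bin",        "S:/test_real.bin")
random_speed = timed_write("D:/archive_random.bin", "S:/test_random.bin")

# Incompressible data should write more slowly; per the post above, the ratio of the
# random file's speed to the real file's speed estimates the compression achieved.
print("Estimated compression ratio: %.2f" % (random_speed / real_speed))
```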

  5. #630
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    There are two more data points I'd be interested in seeing: the compression your Sandforce drive achieves on your C: and D: uncompressed archive files.

    I guess you can measure it by just observing the SMART values on the drive, then copying one file to the drive, then looking at the SMART values again to find the compression (assuming that attribute for actual flash writes is accurate). Maybe you have to delete the file and re-copy it several times to get an accurate measurement?
    I'd like to know how the C:, D: and 67% No Dedup fare with the SF compression as well....but with just 64GB resolution from SMART, I'd probably have to write 2-4TB of each of them (without writing anything else to the drive) to get any meaningful numbers. At just 44.9GB, 23.8GB, and 8.14GB respectively, that's a ton of repetition

    If there's a way to do this without copying, pasting, and deleting 100s of times, I'm all ears.

  6. #631
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I've implemented the option to make a pause after each loop, configurable from no pause up to 30 seconds.

    Should I set the default to 15 seconds, just to give the controller some time to breathe (having deleted all the files created during the loop), or should I set it to 0, as in no pause?
    (the pause would give the SF controller some time to recover)

    All in all, what's 10-15 seconds per loop? Well, it's up to you; we'd have to agree on this. I don't think there are any implications to introducing this option.
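    For what it's worth, the inter-loop pause described here amounts to something like the following (a minimal sketch, not Anvil's actual code; the loop callbacks are placeholders):

```python
import time

PAUSE_SECONDS = 15  # configurable 0-30 s; 0 means no pause between loops

def run_endurance_test(write_loop_files, delete_loop_files, keep_running):
    """Keep writing/deleting file sets; optionally idle after each loop."""
    loops = 0
    while keep_running():
        write_loop_files()      # write this loop's file set to the SSD
        delete_loop_files()     # delete them, which sends TRIM for the freed space
        loops += 1
        if PAUSE_SECONDS:
            # idle time lets the controller finish TRIM/garbage collection
            # before the next loop starts hammering it again
            time.sleep(PAUSE_SECONDS)
    return loops
```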

  7. #632
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I'd like to know how the C:, D: and 67% No Dedup fare with the SF compression as well....but with just 64GB resolution from SMART, I'd probably have to write 2-4TB of each of them (without writing anything else to the drive) to get any meaningful numbers. At just 44.9GB, 23.8GB, and 8.14GB respectively, that's a ton of repetition

    If there's a way to do this without copying, pasting, and deleting 100s of times, I'm all ears.
    Hmmm, I thought Ao1's drive was reporting in GB (or GiB?) written. If yours only reports in 64GB, then I see the problem.

    You could try the write speed ratio I mentioned: just take the ratio of the write speed for a totally random file to the write speed of one of your real-data files. This would not be the most trustworthy estimate, but it would be interesting.

  8. #633
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    For the graphs, we should be careful to get the bytes written correct. Anvil's app actually reports TiB written, even though it is incorrectly labeled as "TB". I'm not sure what the correct units are for the numbers One-Hertz and Anvil have been posting in their updates. In my updates, I am reporting TiB written, taken from Anvil's app.

  9. #634
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    I've implemented the option to make a pause after each loop, configurable from no pause up to 30 seconds.

    Should I set the default to 15 seconds, just to give the controller some time to breathe (having deleted all the files created during the loop), or should I set it to 0, as in no pause?
    (the pause would give the SF controller some time to recover)

    All in all, what's 10-15 seconds per loop? Well, it's up to you; we'd have to agree on this. I don't think there are any implications to introducing this option.
    That would only give time for the TRIM operation to clear before the next loop started again. (At least the app would not hang though).

  10. #635
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by johnw View Post
    For the graphs, we should be careful to get the bytes written correct. Anvil's app actually reports TiB written, even though it is incorrectly labeled as "TB". I'm not sure what the correct units are for the numbers One-Hertz and Anvil have been posting in their updates. In my updates, I am reporting TiB written, taken from Anvil's app.
    I am reporting in Terabytes as per SMART data.
    Last edited by One_Hertz; 06-30-2011 at 12:42 PM.

  11. #636
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    For the graphs, we should be careful to get the bytes written correct. Anvil's app actually reports TiB written, even though it is incorrectly labeled as "TB". I'm not sure what the correct units are for the numbers One-Hertz and Anvil have been posting in their updates. In my updates, I am reporting TiB written, taken from Anvil's app.
    I think all applications involved use TiB (1024*GiB) but label it TB.

    Does anything other than product labels actually use/acknowledge TB as 1000*GB?

  12. #637
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I think all applications involved use TiB (1024*GiB) but label it TB.

    Does anything other than product labels actually use/acknowledge TB as 1000*GB?
    As One_Hertz said, the SMART data is apparently in TB. And outside of Windows, I have seen most programs use the units correctly. Certainly most linux programs don't have the Windows bug of displaying incorrect units. Many of the linux command line utilities allow a choice of which units to display.

    GParted displays in GiB and TiB. Palimpsest (linux disk management util) displays in TB and bytes, so you can see it is using the correct unit labels. GSmartControl displays in TB, TiB, and Bytes.

    The difference between TB and TiB is significant and worth paying attention to, IMO. It is about a 10% difference, so I don't want to get it wrong...
    Last edited by johnw; 06-30-2011 at 12:55 PM.
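    To put a number on that roughly 10% gap:

```python
# one TiB expressed in TB
print(1024**4 / 1000**4)   # 1.099511627776 -> a TiB is ~9.95% larger than a TB
```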

  13. #638
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Okay, here's what I have for charts...I'll probably update them once a day. With permission from Anvil, I'd like to be able to post them in the OP as well. I would like to include one more subset of charts, but I'd need to know everyone's typical write speeds.

    Raw data graphs

    Writes vs. Wear:
    [Attachment: Jun30Host.png]

    MWI Exhaustion:
    [Attachment: Jun30MWIE.png]

    Host Writes So Far:
    [Attachment: Jun30HostBar.png]
    (bars with a border = testing stopped/completed)


    Normalized data graphs

    Writes vs. Wear:
    [Attachment: Jun30NormWear.png]

    MWI Exhaustion:
    [Attachment: Jun30MWIEnorm.png]


    I'd like to note that the Vertex 2 in these charts will probably go away once the 60GB version(s) get going, now that we know what SMART 233 reports.

  14. #639
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    As One_Hertz said, the SMART data is apparently in TB. And outside of Windows, I have seen most programs use the units correctly. Certainly most linux programs don't have the Windows bug of displaying incorrect units. Many of the linux command line utilities allow a choice of which units to display.

    GParted displays in GiB and TiB. Palimpsest (linux disk management util) displays in TB and bytes, so you can see it is using the correct unit labels. GSmartControl displays in TB, TiB, and Bytes.

    The difference between TB and TiB is significant and worth paying attention to, IMO. It is about a 10% difference, so I don't want to get it wrong...
    Didn't know SMART was reporting in TB; I figured it was playing into the Windows scheme (which I don't mind...TB as 1000*GB has no use, IMO). And yeah, the difference is pretty big in the TB/TiB range...this is definitely something we need clarification on, since I've been under the impression that all utilities report TiB, regardless of what they call it.

    Easy enough to fix as long as I know what, specifically, is broken.

    EDIT: I have all the charts in my spreadsheet fixed to TiB assuming that SMART reads out TB in all drives so far (so every drive is adjusted except the Samsung 470). Is it a correct assumption that SMART = TB universally? I don't want to keep posting updated charts that are wrong
    Last edited by Vapor; 06-30-2011 at 01:08 PM. Reason: edit

  15. #640
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    EDIT: I have all the charts in my spreadsheet fixed to TiB assuming that SMART reads out TB in all drives so far (so every drive is adjusted except the Samsung 470). Is it a correct assumption that SMART = TB universally? I don't want to keep posting updated charts that are wrong
    Hard to say, universally. But anyone who has a drive running Anvil's app that has host writes as a SMART attribute (which Samsung 470 does NOT), can easily check what the SMART attribute is reporting, by comparing GiB written (labeled "GB written") in Anvil's app with whatever the SMART attribute is reporting at two different times, and checking to see if the difference in each number tracks as expected.
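    A sketch of that check (assuming smartmontools is available; the device path, attribute ID, and the "GB written" figures are placeholders, and the attribute's unit is exactly what is being tested):

```python
import subprocess

def smart_raw(device, attr_id):
    """Read one SMART attribute's raw value via smartmontools (smartctl -A)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[-1])       # raw value is the last column
    raise ValueError("attribute %s not found" % attr_id)

# Placeholder attribute ID; the host-writes attribute differs per vendor/drive.
HOST_WRITES_ATTR = 241

before_attr = smart_raw("/dev/sda", HOST_WRITES_ATTR)
before_app_gib = 1234.5              # "GB written" shown by Anvil's app at the same moment
# ... let the test run for a while ...
after_attr = smart_raw("/dev/sda", HOST_WRITES_ATTR)
after_app_gib = 1298.7

ratio = (after_app_gib - before_app_gib) / (after_attr - before_attr)
print("GiB written per SMART unit: %.3f" % ratio)
# ~1.00 -> the attribute counts GiB; ~0.93 -> it counts decimal GB; other values
# suggest a different unit (64GB blocks, LBAs, ...) or a scaling factor.
```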

  16. #641
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    On the M4 the 0xAD value must be multiplied by 64GB (got this from Anvil). The easiest way to keep track is to enable "Keep running totals" in Anvil's app.

    [Attachment: smartinfo m4.jpg]
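    (Illustrative only; the raw value below is made up, and whether the 64GB step is decimal or binary is the same open question as above.)

```python
raw_0xad = 183                  # example raw value of SMART attribute 0xAD on the M4
host_writes = raw_0xad * 64     # each increment represents 64GB, per Anvil
print(host_writes, "GB written (64GB granularity)")
```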
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  17. #642
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Running Anvil's app on my V2 50GB 2R0 array now....will know how a V2 scales within a couple of hours (even with the 64GB resolution).

    And 0-fill with compression is pretty nifty in a case like this, hah...abnormally high speeds with no real wear on the NAND itself.

  18. #643
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Vapor View Post
    Okay, here's what I have for charts...I'll probably update them once a day. With permission from Anvil, I'd like to be able to post them in the OP as well. I would like to include one more subset of charts, but I'd need to know everyone's typical write speeds.
    You have my permission

    Great charts!

    On the TB vs TiB matter, I've been reporting what CDI reports, and it looks like it's TB and not TiB.
    I see the issue with the drives that aren't reporting host writes, so what do we do?
    It's an easy fix; all data is tracked in bytes internally, so it's just a conversion.

    I'll probably end up making it an option, displaying TB or TiB, so which one should be the default one?
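    Since the app keeps its counters in bytes, the TB/TiB option boils down to a display-time conversion, roughly like this (a sketch, not Anvil's actual code):

```python
def format_writes(total_bytes, binary=False):
    """Format a byte counter as decimal TB or binary TiB."""
    if binary:
        return "%.2f TiB" % (total_bytes / 1024**4)
    return "%.2f TB" % (total_bytes / 1000**4)

written = 11_093_273_982_341                # example byte counter
print(format_writes(written))               # '11.09 TB'
print(format_writes(written, binary=True))  # '10.09 TiB'
```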

  19. #644
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    For a Vertex 2, I'm pulling ~63.2GiB of writes (per Anvil's app) per 64GB jump in SMART. I was expecting 64GiB or 59.6GiB....not something between. Will run it more.

    EDIT: Shifting more to ~63.4GiB per 64GB now. Putting it out of range of even the non-existent 1000*MiB = GB (62.5GiB = 64 of those). Must be some 1% transfer overhead or something getting counted? But overall, it looks like 64GB SMART = 64GiB with Vertex 2. Now to test my Intel X-25M 80GB G1.
    Last edited by Vapor; 06-30-2011 at 03:01 PM. Reason: edit

  20. #645
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    I'll probably end up making it an option, displaying TB or TiB, so which one should be the default one?
    I vote for TB for default (I'm assuming you mean the computation, not just the label display).

    There is a good reason why all scientific and engineering disciplines (except computer science) use base-10 unit prefixes. They are easy to work with in a base-10 number system! If I need to add up 111TB + 222GB + 333MB + 444KB, it is easy: 111.222333444 TB. But try 111TiB + 222GiB + 333MiB + 444KiB. I need a calculator and a lot of keystrokes to get 111.217114862 TiB.

    If you change it, I should still be able to enter an updated number, right? I can just stop the app, convert from GiB to GB, enter the new number, and restart the test?
    Last edited by johnw; 06-30-2011 at 03:02 PM.
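    johnw's figures check out; a quick way to verify the two sums:

```python
KB, MB, GB, TB = 1000, 1000**2, 1000**3, 1000**4
KiB, MiB, GiB, TiB = 1024, 1024**2, 1024**3, 1024**4

print((111*TB + 222*GB + 333*MB + 444*KB) / TB)       # 111.222333444
print((111*TiB + 222*GiB + 333*MiB + 444*KiB) / TiB)  # 111.21711486212824
```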

  21. #646
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    For a Vertex 2, I'm pulling ~63.2GiB of writes (per Anvil's app) per 64GB jump in SMART. I was expecting 64GiB or 59.6GiB....not something between. Will run it more.
    Weird. How many total GiB has Anvil's app written since you started checking the SMART attribute?

  22. #647
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Was going to do a multi-TiB test, but found it easy enough to stop/start the Anvil app and refresh SMART manually every ~.67GB (2 seconds) when I expected it to turn over. First differential was 63.3GiB, second was 63.2GiB, third and fourth were 63.4GiB per 64GB jump.

    I vote for MiB, GiB, and TiB calculation because that is what's used in Windows and seemingly everything except product labels. And MB/GB/TB can be interpreted as either 1000^x or 1024^x (hence all the confusion and puzzle solving in the past few posts) whereas MiB/GiB/TiB have one interpretation. Yes, 1000^x MB/GB/TB is easier to add, but I'm not sure how often that issue comes up or will come up with this testing.

    EDIT: Happened to get one of my drives' SMART 233 values to within 2GiB of turnover (the two drives had identical values and are only identical for a 2GiB window)....going to run brief compression/write amplification tests now. May not be pertinent since my daily driver V2s are on an old firmware and in RAID, but who knows
    Last edited by Vapor; 06-30-2011 at 03:25 PM. Reason: edit

  23. #648
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Nothing too monumental, but on my setup (2R0 Vertex 2 50GB with FW1.10, 32KB stripe), Anvil's "67% Compression with No Dedup" setting has a write amplification of ~1.185x. A 192GB increase in SMART 233 took only 162GiB +/- 0.7GiB of writes from the app. Will retest on a single drive with newer firmware later unless somebody beats me to it.

    I have no way to write, in bulk, my C: and D: stores to my array, so that testing will have to wait as well.
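    The write-amplification figure above is just the ratio of the two counters (assuming SMART 233 tracks flash writes in the same units as the app's host-write counter):

```python
nand_writes = 192.0    # increase in SMART 233 over the test (64GB granularity)
host_writes = 162.0    # GiB written by Anvil's app over the same span
print("WA = %.3fx" % (nand_writes / host_writes))   # ~1.185x
```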

  24. #649
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    Was going to do a multi-TiB test, but found it easy enough to stop/start the Anvil app and refresh SMART manually every ~.67GB (2 seconds) when I expected it to turn over. First differential was 63.3GiB, second was 63.2GiB, third and fourth were 63.4GiB per 64GB jump.
    So that is a total of A = 271.9788 GB written by Anvil's app in four trials, in which (I think you mean) the SMART attribute increased by 256.

    271.9788 / 256 = 1.0624172

    (1024/1000)^1 = 1.024000
    (1024/1000)^2 = 1.048576
    (1024/1000)^3 = 1.073742
    (1024/1000)^4 = 1.099512

    So the units of the SMART attribute are hard to explain. They are closest to GiB, but still about 1% off from GiB.

    I guess there is a bug in either the Sandforce firmware or Anvil's app.

  25. #650
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    Nothing too monumental, but on my setup (2R0 Vertex 2 50GB with FW1.10, 32KB stripe), Anvil's "67% Compression with No Dedup" setting has a write amplification of ~1.185x.
    Interesting. Sandforce could not compress it at all. Makes me wonder about what Anand's SSDs had written to them with his repeated claim of 0.6 WA.

