Page 4 of 4 FirstFirst 1234
Results 76 to 96 of 96

Thread: How big is Host Writes on your SSD?

  1. #76
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Ao1 View Post
    Why would they reference a trace from a non 4K aware OS (XP) using a HDD with only 1GB of RAM?

    Nice info though. I see they say 29% of all write commands are sequential.

    Everyone's going to have a different workload. At least this one is documented. Just a shame it was on an antiquated set up.
    The setup doesn't matter in determining these numbers... the data written would be the same on HDD or SSD. The low RAM could increase certain writes however.

  2. #77
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Ehh, I was wondering why it showed I/O >4KB which is not 4KB aligned.

    Quote Originally Posted by Ao1 View Post
    Why would they reference a trace from a non 4K aware OS (XP) using a HDD with only 1GB of RAM?

    Nice info though. I see they say 29% of all write commands are sequential.

    Everyone's going to have a different workload. At least this one is documented. Just a shame it was on an antiquated set up.
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  3. #78
    Xtreme Enthusiast
    Join Date
    Jan 2005
    Posts
    674
    160GB G2 used in my main machine since launch
    all user folders are mapped to RAID0 hard drive array

    ....

  4. #79
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by One_Hertz View Post
    The setup doesn't matter in determining these numbers... the data written would be the same on HDD or SSD. The low RAM could increase certain writes however.
Agreed, but it suggests the PC being monitored was more or less being used as a typewriter. It would have been much better if they had used a trace from a modern PC/OS with a wider range of applications. Using an SSD for the trace would also have been more helpful.

    At the end of the day however it is still going to be mostly small xfers, but maybe some larger xfers would have shown up.

  5. #80
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by One_Hertz View Post
    The setup doesn't matter in determining these numbers... the data written would be the same on HDD or SSD. The low RAM could increase certain writes however.
No it would not, and the difference is considerable. NTFS journaling and MFT access on a larger HDD are different than on an SSD.

  6. #81
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by alfaunits View Post
    NTFS journaling and MFT access on a larger HDD drive is different than on an SSD.
    No? Expand?

  7. #82
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    One_Hertz

    ETA for your drive?

  8. #83
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Anvil View Post
    One_Hertz

    ETA for your drive?
    Today apparently. NCIX is very fast here in Canada.

    I guess we need to come to a consensus on our test parameters so we do the same thing. Are we going to TRIM the drive or not? Some users will use TRIM, others will not due to RAID or due to an OS that doesn't support TRIM.

    The whitepaper I linked recommends using QD 1 for endurance testing as higher QD is not necessarily realistic (as Ao1's testing showed as well).

    For size distributions maybe 50% static data, 25% test file size, 25% empty? What mix of block sizes do you want to use? As for seq/random I am thinking 70% random?

    Are we using IOMeter or are you writing us a utility?
    Last edited by One_Hertz; 05-13-2011 at 09:06 AM.

  9. #84
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    So we have a 320 & an X25-v head to head?

    My vote is for TRIM.

For comparison here are the JEDEC xfer sizes for their proposed SSD Enterprise Endurance Workload.

    That doesn't seem too far off the mark to what I have seen when monitoring, although I do see larger xfers ~ 1MB and above from time to time. Larger xfers are obviously more application centric, whereas those below are more OS related. I'd vote for a bit of a mix.

    512 bytes (0.5k) 4%
    1024 bytes (1k) 1%
    1536 bytes (1.5k) 1%
    2048 bytes (2k) 1%
    2560 bytes (2.5k) 1%
    3072 bytes (3k) 1%
    3584 bytes (3.5k) 1%
    4096 bytes (4k) 67%
    8192 bytes (8k) 10%
    16,384 bytes (16k) 7%
    32,768 bytes (32k) 3%
    65,536 bytes (64k) 3%

    http://www.jedec.org/standards-documents/docs/jesd219
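The distribution above drops straight into a weighted sampler if anyone wants to generate a matching synthetic workload. A minimal Python sketch (my own, not any official JEDEC tool):

```python
import random

# JESD219-style block-size distribution from the post above:
# transfer size in bytes, paired with its weight in percent.
SIZES = [512, 1024, 1536, 2048, 2560, 3072, 3584, 4096, 8192, 16384, 32768, 65536]
WEIGHTS = [4, 1, 1, 1, 1, 1, 1, 67, 10, 7, 3, 3]  # sums to 100

def next_io_size():
    """Pick one transfer size according to the weighted distribution."""
    return random.choices(SIZES, weights=WEIGHTS, k=1)[0]

# Sanity check: 4K should dominate a large sample (~67%).
sample = [next_io_size() for _ in range(100_000)]
share_4k = sample.count(4096) / len(sample)
print(f"4K share: {share_4k:.2%}")
```

Each call to `next_io_size()` would give the size of the next write in the test loop, so the long-run mix converges on the JEDEC percentages.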

  10. #85
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    This test will take forever. Such a workload will be very very slow at QD1 on these 40GB SSDs. 10MB/s? I guess we'll see.
    Last edited by One_Hertz; 05-13-2011 at 09:43 AM.

  11. #86
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    There are a lot of things to take into consideration but I'm sure we'll find some middle ground.

    We can't really recreate normal operations and so the question is what do we want to find out?
    First of all, should we do the same test or should we take different routes?
Is there a point in doing the same test? It would of course tell us what we can expect using 25nm vs 34nm NAND.
Keep in mind that this is a very small sample size; a different batch of NAND could perform better or worse.

    50% static is too much imho, ~12GB would be sufficient. (about the same size as the W7 OS)

    So, I suggest using:
    -12.5-15 GB of static data
    -A single 2.5GB "static" random datafile for random writes. (TRIM wouldn't work on the random datafile as it wouldn't be deleted/recreated)
    -12.5-15 GB for creating/deleting or copying files
    That would leave 25% of the drive empty.
    No over-provisioning, just the standard op set from factory.
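On a 40GB drive the proposed split works out like this (back-of-the-envelope sketch; taking the midpoint of the 12.5-15 GB ranges is my assumption):

```python
drive_gb = 40.0   # X25-V / 320 40GB class

static_gb = 13.75  # midpoint of the proposed 12.5-15 GB static data
random_gb = 2.5    # single "static" random datafile
churn_gb = 13.75   # midpoint of 12.5-15 GB for creating/deleting/copying files
free_gb = drive_gb - static_gb - random_gb - churn_gb

# Should come out to ~10 GB free, i.e. the 25% empty mentioned above.
print(f"free: {free_gb:.2f} GB ({free_gb / drive_gb:.0%} of drive)")
```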

The most realistic route would be to dynamically create/delete files, and I do think we need to make use of TRIM.

    Then there is the matter of reporting/collecting data, there are several ways to report and publish, either manually or by that utility I'm creating.
    I can put a "screenshot" of the collected metrics on an FTP server and we could just link it to that new thread. A lot of options, could be updated every hour or whatever we decide on.

I haven't decided what computer to run the test on yet, I've got a couple of options though.

    I suggest we settle for something and run it for a few days just to see what happens and then we make adjustments based on what happens during that "test" period.
    We could be in for a surprise, I don't know what to expect...

  12. #87
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    I think we should do the same test... if we do a different test and see different results then we won't know whether it is due to NAND differences or the different test.

    What software do you propose we use for this? I've got an old crappy P4 rig I can throw W7 onto and run 24/7 for this test.

  13. #88
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    OK, then we go for doing the same test.

    W7 would be perfect, I'll see what computer I can find, might be a laptop but that shouldn't make a difference.

    I'll PM you some more info on my utility later tonight and we'll take it from there.
We'll just have to make sure that we're able to somehow retrieve SMART info from the SSDs/disk controllers; it might work from my utility, not sure yet.

    edit:

    It shouldn't be a problem using the filesizes that Ao1 suggested, 4K writes QD1 are at 35-40MB/s on a fresh X25-V, could change dramatically within a few hours though.
    As part of the test we could also copy/delete files from the Windows directory over and over again.
    Last edited by Anvil; 05-13-2011 at 10:41 AM.

  14. #89
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Anvil View Post
    OK, then we go for doing the same test.

    W7 would be perfect, I'll see what computer I can find, might be a laptop but that shouldn't make a difference.

    I'll PM you some more info on my utility later tonight and we'll take it from there.
We'll just have to make sure that we're able to somehow retrieve SMART info from the SSDs/disk controllers; it might work from my utility, not sure yet.

    edit:

    It shouldn't be a problem using the filesizes that Ao1 suggested, 4K writes QD1 are at 35-40MB/s on a fresh X25-V, could change dramatically within a few hours though.
    As part of the test we could also copy/delete files from the Windows directory over and over again.
    We need a way to automate everything... copying/deleting files by hand is not an option at all.
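A batch copy/delete cycle like the one being discussed is only a few lines to automate. A hypothetical sketch (the source path and batch size are placeholders, not whatever the real test utility will end up doing):

```python
import itertools
import os
import shutil
import tempfile

SRC = r"C:\Windows\System32"  # placeholder source of real-world files
DST = tempfile.mkdtemp(prefix="endurance_")  # scratch area on the test SSD

def copy_delete_cycle(src, dst, max_files=200):
    """Copy a batch of files from src to dst, then delete them again.

    Returns the number of files written, so a driver loop can tally
    bytes/files churned per cycle.
    """
    copied = []
    for name in itertools.islice(os.listdir(src), max_files):
        path = os.path.join(src, name)
        if os.path.isfile(path):
            copied.append(shutil.copy(path, dst))
    for path in copied:
        os.remove(path)
    return len(copied)

# Run copy_delete_cycle(SRC, DST) in a loop 24/7; each call writes and
# then frees one batch, which also gives TRIM something to work on.
```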

  15. #90
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Sure.

We'll just need some way to monitor that it's still working, plus a way to report; depending on how we can get at that SMART info, it should be fully automatic, reporting included.

  16. #91
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
I think that using the JEDEC standards would be great. Looks to be very exciting, can't wait to see this test.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  17. #92
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by One_Hertz View Post
    No? Expand?
The MFT reserved area is smaller on an SSD than on an HDD of similar size, so more of the SSD is used, whereas on an HDD certain areas of the drive (even 10%) might never be touched unless the drive gets filled >90%.
The log does not record last access time on an SSD by default, whereas it does on an HDD (W7 only).

    The area where data is written to on an SSD is different than that on an HDD because the starting format is not the same. There is quite a bit of difference. It's not 50% or probably not even 20% difference in terms of amount, but with random I/O it can be >10%.

  18. #93
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by alfaunits View Post
The MFT reserved area is smaller on an SSD than on an HDD of similar size, so more of the SSD is used, whereas on an HDD certain areas of the drive (even 10%) might never be touched unless the drive gets filled >90%.
The log does not record last access time on an SSD by default, whereas it does on an HDD (W7 only).

    The area where data is written to on an SSD is different than that on an HDD because the starting format is not the same. There is quite a bit of difference. It's not 50% or probably not even 20% difference in terms of amount, but with random I/O it can be >10%.
    I still don't know what you mean, but yes I forgot the W7 SSD optimizations.

  19. #94
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    I mean the data written is not the same, in reply to "The setup doesn't matter in determining these numbers" - the setup does matter.

  20. #95
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    You guys up and running yet?

If not, could you run the test file for an hour first and then use Intel's method to calculate the projected wear rate based on SMART values?

    This could then be compared to what happens as the experiment continues.

    The worst possible endurance scenario for the 40GB 320 is 5TB, which could be racked up in 3 days @ 20MB/s 24/7. Once speeds slow down to a crawl maybe it would last a couple of months.

    I can see this lasting at least 6 months, maybe a year even.
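The arithmetic behind the 3-day worst case, plus a naive linear projection from the E9 media-wearout indicator (a rough sketch of what I assume "Intel's method" amounts to: linearly extrapolating the countdown from 100):

```python
# Worst-case endurance check: 5 TB written at a sustained 20 MB/s, 24/7.
tb_limit = 5.0
mb_per_s = 20.0
days = tb_limit * 1e6 / (mb_per_s * 86_400)  # MB / (MB per day) ~= 2.9 days
print(f"{days:.1f} days to hit {tb_limit} TB")

# Naive linear life projection from SMART E9 (media wearout indicator),
# which starts at 100 and counts down toward 1.
def projected_life_days(days_run, e9_now, e9_start=100):
    """Extrapolate total days until E9 reaches 1, assuming linear wear."""
    used = e9_start - e9_now
    if used <= 0:
        raise ValueError("no measurable wear yet")
    return days_run * (e9_start - 1) / used

# e.g. if E9 dropped from 100 to 97 after 2 days of hammering:
print(f"{projected_life_days(2, 97):.0f} days to wear-out")
```

The linear assumption is the weak point: as you say, once speeds slow to a crawl the daily write volume drops, so the real lifetime stretches out well past the naive projection.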
Attached Thumbnails: 1.png (21.8 KB)

  21. #96
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Yeah, we're up and running.

    I started a bit ahead of One_Hertz, his 25nm 320 looks to be a bit faster than my 34nm.

    I'll start a new thread shortly.

It could take some time, but E9 "Wear-out" has already moved down a few notches on my drive; don't know about One_Hertz yet.
