
Thread: SSD roundup: Vertex 3 vs M4 vs C300 vs 510 vs 320 vs x25-M vs F120 vs Falcon II

  1. #51
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by Ao1 View Post
    If you only read/write across a limited span of the SSD you get very good results, as wear levelling can kick in. If you read/write across the complete span of the SSD it will really struggle. Off the top of my head Intel give specs based on an 8GB span. They also now give data for performance across the full span and IOPS drop like a stone; 300 or 400 from recollection.

    If you write lots of small files and then overwrite them I think it also slows things down compared to overwriting larger files.

    Benchmarking SSDs is a nightmare.
    First of all ... sorry for my very bad English.

    The problem here is with the Vertex 3 only, from what I've seen so far. Perhaps the SandForce SF-1200 too, but I haven't tested it.

    Random read 4KB on the M4 256 GB (for example) is 58 MB/s at QD3 whether you test it on a 1 GB file, an 8 GB file, and so on.
    Random read 4KB on the Vertex 3 240 GB is 72 MB/s on a 1 GB file, 72 MB/s on a 2 GB file, 52 MB/s on a 4 GB file, 44 MB/s on an 8 GB file and 44 MB/s on a 16 GB file.

    In this case the size of the file = the size on the NAND, since I use incompressible (random) data in IOMeter.

    But if you use highly compressible data (not very realistic, but ...), with an 8 GB file you get 74 MB/s. Why? Because the 8 GB file is highly compressed by the SandForce, and perhaps takes something like 2 GB of real NAND or less.
    You didn't test random access on 8 GB but on something like 2 GB.
    If I test random access within a 45 GB file of highly compressible data, for example, I get 45 MB/s.

    The fact is that if you compare random read on the Vertex 3 vs the M4 using a (very) small portion of the SSD, the Vertex 3 shows a better random read than the M4. But over a bigger part of the SSD, the M4 is faster.

    For me, performance on a very small part of an SSD is less important. Random read is important for heavy multitasking, and heavy multitasking may need to read data from a large part of the SSD.
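
    To make the span effect concrete, here is a rough sketch of how one could time 4 KiB random reads over spans of different sizes. It is not the IOMeter setup used for the numbers above: it assumes Linux and Python 3.7+, a pre-written incompressible file called testfile.bin (a placeholder name), and it runs at QD1 rather than QD3.

    Code:
    # Rough sketch (not IOMeter): time 4 KiB random reads over spans of different
    # sizes inside one pre-written, incompressible test file on the drive under
    # test. Assumes Linux and Python 3.7+; O_DIRECT bypasses the OS page cache so
    # every read really hits the SSD. Queue depth is 1 (synchronous reads).
    import mmap, os, random, time

    BLOCK = 4096

    def random_read_mbps(path, span_bytes, seconds=10):
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, BLOCK)          # page-aligned buffer, required by O_DIRECT
        blocks = span_bytes // BLOCK
        done, start = 0, time.monotonic()
        while time.monotonic() - start < seconds:
            os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
            done += BLOCK
        os.close(fd)
        return done / (time.monotonic() - start) / 1e6

    # "testfile.bin" is a placeholder: a file of random (incompressible) data at
    # least as large as the biggest span tested.
    for gib in (1, 2, 4, 8, 16):
        print(f"{gib} GiB span: {random_read_mbps('testfile.bin', gib << 30):.1f} MB/s")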

  2. #52
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by Ao1 View Post
    That is why it is good that Marc ran real life applications and timed them. He is waiting for a mod to activate his account and then he will hopefully join us.
    For a lot of things SSDs are ... CPU limited.

    For example, if I go from an i7-2600K to an i7-2600K @ 4.5 GHz on Crysis 2 level loading:
    WD 2 TB Black : 21.5s => 19.2s (-2.3s)
    X25-M 120 : 18.1s => 15.5s (-2.6s)
    Vertex 3 240 : 17.1s => 14.4s (-2.7s)

    In heavy multitasking perhaps we can see some (small) difference, but it's hard to find a realistic case that I can use as a benchmark.

    For example, during the bench I tried combining the launch of 3ds Max/Photoshop/Word/Excel (which in fact is already heavy) with a file copy (read+write) limited to 5 MB/s, and I don't get any difference at all.

    In the end, more than CPU limited, I think modern SSDs are now "user" limited.
    Last edited by Marc HFR; 04-21-2011 at 10:14 AM.

  3. #53
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
    Quote Originally Posted by Ao1 View Post
    ^^ A significant portion of I/O activity occurs within the Windows file system without touching the storage device.

    I think the objective of FancyCache is to better manage the Windows file system cache by using different algorithms to try to keep useful data in cache, thus limiting the need to re-read dropped cache data from the device.

    Better cache management can in theory be very beneficial if the storage device is slow, or if the cache management program can be specially tweaked for a particular application. I've tried FancyCache with an SSD and it made no perceivable performance difference to me.

    Windows does an excellent job of managing cache for general use. Of course there will always be situations where a more focused cache management system could do better for a particular task, but it might at the same time slow things down elsewhere in the process.

    SSDs are so fast that it negates the benefit that might be seen with a HDD.

    I'm surprised you are seeing more writes. Normally I would say it's a 10 to 1 (or less) ratio between reads and writes. Do you do a lot of tasks that write data?

    Regarding write caching, don't forget that SSDs also combine writes to reduce wear. This happens within the SSD itself.

    My 2c anyway. Maybe there is a better explanation of how the OS cache/cache programs work.
    Again, it's not a special setup or usage pattern that's hitting writes harder; it's bog standard on both machines, regular XP x64 doing regular use: browsing, gaming, movies, text editing, etc. It doesn't take long to see the write-heavy usage pattern of Windows: install FancyCache and keep the performance statistics window open. FancyCache is better than drive write combining; it can combine more because it can be set to delay writes longer and over a larger pool of data (my setup being a ten minute write delay on a 2 GB cache).

    >SSDs are so fast that it negates the benefit that might be seen with a HDD.

    There is a point of diminishing returns; FancyCache gets you there without having to wait for the next-gen SSDs or spending a few grand on a caching controller plus SSD RAID setup or a high-priced PCIe SSD drive.
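
    To illustrate why a longer delay over a larger pool combines more writes, here is a toy sketch of a deferred write-back buffer. It is not FancyCache's actual algorithm, just the general idea: repeated writes to the same block within the delay window collapse into a single device write.

    Code:
    # Toy illustration (not FancyCache's real algorithm): a deferred write-back
    # buffer keyed by 4 KiB block number. Repeated writes to the same block within
    # the delay window just overwrite the buffered copy, so only one physical write
    # reaches the device when the block is finally flushed.
    import time

    BLOCK = 4096

    class DeferredWriteCache:
        def __init__(self, delay_s=600, capacity_blocks=(2 * 1024**3) // BLOCK):
            self.delay_s = delay_s            # e.g. a ten minute write delay
            self.capacity = capacity_blocks   # e.g. a 2 GB pool
            self.dirty = {}                   # block number -> (data, first_write_time)
            self.device_writes = 0

        def write(self, block_no, data):
            first = self.dirty.get(block_no, (None, time.monotonic()))[1]
            self.dirty[block_no] = (data, first)        # coalesce: newest data wins
            if len(self.dirty) > self.capacity:
                self.flush(force=True)

        def flush(self, force=False):
            now = time.monotonic()
            for block_no, (data, first) in list(self.dirty.items()):
                if force or now - first >= self.delay_s:
                    self.device_writes += 1             # one device write per dirty block
                    del self.dirty[block_no]

    cache = DeferredWriteCache()
    for _ in range(1000):                 # 1,000 logical writes to the same block...
        cache.write(42, b"x" * BLOCK)
    cache.flush(force=True)
    print(cache.device_writes)            # ...turn into 1 physical write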

  4. #54
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Montreal
    Posts
    791
    So I'm still a bit confused about the take-home message from this thread. When I discuss SSD purchases with my friends, what am I going to recommend them for the future? A vertex 3 120/240gb or a m4 128/256gb?

  5. #55
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by antiacid View Post
    So I'm still a bit confused about the take-home message from this thread. When I discuss SSD purchases with my friends, what am I going to recommend them for the future? A vertex 3 120/240gb or a m4 128/256gb?
    The cheaper one.

  6. #56
    Xtreme Addict
    Join Date
    May 2006
    Posts
    1,315
    So are there still issues running FancyCache with RAID0'd SSDs? Many folks talked about corrupting their arrays. I haven't kept up with it since I read those stories.

    I'd love to give it a shot, but don't want another hassle on my hands....

  7. #57
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Welcome to the forum Marc

    +1
    The cheapest or the one that's available.
    (m4s are hard to find)

  8. #58
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
    Quote Originally Posted by Brahmzy View Post
    So are there still issues running FancyCache with RAID0'd SSDs? Many folks talked about corrupting their arrays. I haven't kept up with it since I read those stories.

    I'd love to give it a shot, but don't want another hassle on my hands....
    Not having a RAID SSD setup, I don't know.

  9. #59
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Marc HFR View Post
    For a lot of things SSDs are ... CPU limited.

    For example, if I go from an i7-2600K to an i7-2600K @ 4.5 GHz
    WD 2 TB Black : 21.5s => 19.2s (-2.3s)
    X25-M 120 : 18.1s => 15.5s (-2.6s)
    Vertex 3 240 : 17.1s => 14.4s (-2.7s)

    In heavy multitasking perhaps we can see some (small) difference, but it's hard to find a realistic case that I can use as a benchmark.

    For example, during the bench I tried combining the launch of 3ds Max/Photoshop/Word/Excel (which in fact is already heavy) with a file copy (read+write) limited to 5 MB/s, and I don't get any difference at all.

    In the end, more than CPU limited, I think modern SSDs are now "user" limited.
    Hi Marc, welcome to XS.

    That does not surprise me. It's beyond my understanding exactly how I/O processing works in either hardware or software, but both seem to like to wait for an I/O to finish before the next I/O is requested from the storage device. How quickly that I/O is processed by the CPU/RAM would therefore seem to have an impact.

    Right now I'd say if you are using a HDD an SSD would be the best upgrade, but if you already have an SSD a faster CPU would be the better upgrade choice. (Assuming you had to choose one or the other.)
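
    A back-of-envelope model of that waiting behaviour (my assumption, not a measurement): at QD1 the load time is roughly the number of I/Os times (device latency + CPU time per I/O), so a faster CPU takes a similar absolute chunk off every drive, which is the pattern in Marc's Crysis 2 numbers.

    Code:
    # Back-of-envelope model, not a measurement: at QD1 the load takes roughly
    # N_io * (device latency + CPU time per I/O). A faster CPU shrinks only the CPU
    # share, so every drive gains a similar absolute amount -- the pattern in the
    # Crysis 2 numbers above. All figures here are hypothetical.
    def load_time(n_io, device_latency_ms, cpu_ms_per_io):
        return n_io * (device_latency_ms + cpu_ms_per_io) / 1000.0   # seconds

    N = 10_000                                # hypothetical I/O count for a level load
    for name, lat in [("HDD", 1.5), ("SSD", 0.2)]:
        stock = load_time(N, lat, 0.60)       # hypothetical CPU cost per I/O at stock
        oc = load_time(N, lat, 0.35)          # hypothetical CPU cost per I/O at 4.5 GHz
        print(f"{name}: {stock:.1f}s -> {oc:.1f}s ({stock - oc:.1f}s saved)")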

    Reliability is also a big issue for me and would be my number one choice, even over price.

  10. #60
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by keiths View Post
    Again.......
    Not sure if you missed it but F@32 created a thread about FancyCache:

    http://www.xtremesystems.org/forums/...d.php?t=267879

  11. #61
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
    Quote Originally Posted by Ao1 View Post
    Not sure if you missed it but F@32 created a thread about FancyCache:

    http://www.xtremesystems.org/forums/...d.php?t=267879
    The XS threads on FancyCache are where I found out about it.
    Last edited by keiths; 04-21-2011 at 09:51 AM.

  12. #62
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    US
    Posts
    1,379
    Quote Originally Posted by Brahmzy View Post
    So are there still issues running FancyCache with RAID0'd SSDs? Many folks talked about corrupting their arrays. I haven't kept up with it since I read those stories.

    I'd love to give it a shot, but don't want another hassle on my hands....
    I've got no issues with FancyCache and my R0 array. I've been using it for months without so much as a hiccup.

    --Matt

  13. #63
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by Ao1 View Post
    Reliability is also a big issue for me and would be my number one choice, even over price.
    From the numbers I get from a big French e-tailer, for reliability we have:

    Intel X25-M > Crucial C300 > OCZ Vertex 2

    Over 6 to 12 months of use, the X25-M is at a 0.3% return rate, vs 1% for the C300 and 3.6% for the Vertex 2.

    Hope this helps.

  14. #64
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Marc HFR View Post
    Random read 4KB on the M4 256 GB (for example) is 58 MB/s at QD3 whether you test it on a 1 GB file, an 8 GB file, and so on.
    Random read 4KB on the Vertex 3 240 GB is 72 MB/s on a 1 GB file, 72 MB/s on a 2 GB file, 52 MB/s on a 4 GB file, 44 MB/s on an 8 GB file and 44 MB/s on a 16 GB file.
    Thanks for posting that observation. That might explain something that I have been wondering about since the first reviews of the V3 came out. With CDM, the 4KB QD1 random reads are around 35 MB/s, but with AS-SSD, they are only 20 MB/s. With your observation, this could be explained by AS-SSD performing random reads over a larger span than CDM.

    Or maybe not. I think AS-SSD uses a 1GB test file, and CDM is often run with a 1GB test file, which would suggest that the spans are not the source of the discrepancy. I suppose AS-SSD might use full-span for random reads (ignoring its test file), since for reads, the test file is not necessary. But I don't know. Maybe this could be determined with hIOmon? Does it record the LBAs that are accessed?

  15. #65
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I've been meaning to check that for a while. Will do it now.

  16. #66
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by johnw View Post
    Maybe this could be determined with hIOmon? Does it record the LBAs that are accessed?
    I think so

  17. #67
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by johnw View Post
    Thanks for posting that observation. That might explain something that I have been wondering about since the first reviews of the V3 came out. With CDM, the 4KB QD1 random reads are around 35 MB/s, but with AS-SSD, they are only 20 MB/s. With your observation, this could be explained by AS-SSD performing random reads over a larger span than CDM.

    Or maybe not. I think AS-SSD uses a 1GB test file, and CDM is often run with a 1GB test file, which would suggest that the spans are not the source of the discrepancy. I suppose AS-SSD might use full-span for random reads (ignoring its test file), since for reads, the test file is not necessary. But I don't know. Maybe this could be determined with hIOmon? Does it record the LBAs that are accessed?
    Hi johnw,

    The short answer to both of your questions is "Yes".

    Along these lines, please see the hIOmon thread post #297 where I used the hIOmon "I/O Operation Trace" feature along with the hIOmon "Summary" metrics feature in analyzing only the "Access Time" option of AS SSD.

    The hIOmon "I/O Operation Trace" feature option enables you to collect, display, and export an individual record for each and every I/O operation for those I/O operations specified to be monitored by hIOmon. This "I/O Operation Trace" record can include the starting address (basically LBA in the case of device I/O operations) of the data transfer and the length of the data transfer.

    The hIOmon software also provides an "Access Range Summary" option, which enables you to create a CSV-based export file where each record/row within this export file contains the "Access Range Summary" summarized I/O operation performance metrics for a separate, distinct "Access Range" associated with a specific "physical" device. An "Access Range" is an address span to which read and/or write I/O operations have performed data transfers as observed by the hIOmon I/O Monitor. BTW, the "Access Range Summary" metrics are based upon the "I/O Operation Trace" metrics that have been collected by hIOmon.

    But that's another story.
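
    As a sketch of the kind of post-processing that could answer johnw's span question, here is how one might scan an exported per-I/O trace CSV for the lowest and highest addresses read. The column names below are placeholders of my own, not hIOmon's actual field names.

    Code:
    # Hypothetical post-processing of an exported per-I/O trace CSV. The column
    # names ("ReadWrite", "StartingAddress", "DataTransferLength") are placeholders,
    # not hIOmon's actual field names. The idea: find the lowest and highest address
    # touched by reads, i.e. the span a benchmark actually covered.
    import csv

    SECTOR = 512

    def read_span_gib(csv_path):
        lo, hi = None, 0
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["ReadWrite"].strip().lower() != "read":
                    continue
                start = int(row["StartingAddress"])                   # in sectors (assumed)
                end = start + int(row["DataTransferLength"]) // SECTOR
                lo = start if lo is None else min(lo, start)
                hi = max(hi, end)
        return (hi - lo) * SECTOR / 1024**3 if lo is not None else 0.0

    print(f"read span: {read_span_gib('trace_export.csv'):.2f} GiB")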

  18. #68
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by overthere View Post
    The short answer to both of your questions is "Yes".
    I'm not sure I follow. There is a wealth of information in your posts, so I might be missing (or misinterpreting) something.

    Did your previous posts specify the span (range) of I/O accesses for AS-SSD 4KB random READ QD=1? If so, does it span the full SSD, part of the SSD, or only the 1GB test file?

    Also, it looks like your posts #283 and #285 in the hIOmon thread are identical. Was there supposed to be a difference?

  19. #69
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    AS SSD 1.5.3784.37609 - 4K Test Only

    1st - Test file - Write
    Sectors - 98668848 to 100764976
    2,049 0.5 MB xfers. Total write = 1,024.5 MB

    2nd - 4K Writes
    Sectors - 98668848 to 100765984
    175,923 4K xfers. Total write = 687.19 MB

    3rd - 4K Reads
    Sectors - 98668848 to 100765952
    42,609 4K xfers. Total read = 166.43 MB

    The above is an approximation. There was a bit of activity outside of the sectors above but I excluded it as I suspect it was OS related.

    Basically it looks like AS SSD conditions a small span of the drive with 0.5MB writes. It then reads and writes within the same sectors.
    Now for CDM
    Last edited by Ao1; 04-21-2011 at 01:20 PM. Reason: Corrected mistake - See post below

  20. #70
    Registered User
    Join Date
    Apr 2011
    Posts
    34
    Quote Originally Posted by Ao1 View Post
    AS SSD 1.5.3784.37609 - 4K Test Only

    1st - Test file - Write
    Sectors - 98668848 to 100764976
    2,049 0.5 MB xfers. Total write = 361.5 MB
    For me, 2,049 0.5 MB xfers = 1,024.5 MB

  21. #71
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by johnw View Post
    I'm not sure I follow. There is a wealth of information in your posts, so I might be missing (or misinterpreting) something.

    Did your previous posts specify the span (range) of I/O accesses for AS-SSD 4KB random READ QD=1? If so, does it span the full SSD, part of the SSD, or only the 1GB test file?

    Also, it looks like your posts #283 and #285 in the hIOmon thread are identical. Was there supposed to be a difference?
    Sorry for the confusion.

    The hIOmon thread post #283 briefly discusses only write I/O operations.

    The subsequent hIOmon thread post #285 briefly mentions only read I/O operations and notes that AS SSD performed its read I/O operations to the physical device directly at the physical device level within the OS I/O stack.

    The hIOmon thread post #297 provides an overall summary with some accompanying details. It emphasizes again that AS SSD performed its read I/O operations (which were random) directly to the physical device (in fact, hIOmon observed only two file read I/O operations, both of which had a data transfer length of zero for the "AS-SSD-TEST42\test.bin" test file).

    It is important to note that all of the above posts deal only with the case where the AS SSD "Access Time" option alone is selected. The other AS SSD test run options (e.g., 4K) were not addressed/discussed within these posts (since at the time Ao1 was only concerned with the "Access Time" option, if I recall correctly).

    In any case, my above posts did not make any mention of the span (range) of I/O accesses performed by AS-SSD. The I/O operation trace that I captured using hIOmon did include the addresses accessed, but I did not do any analysis of these addresses at that time.
    Last edited by overthere; 04-21-2011 at 12:49 PM. Reason: Corrected spelling; added direct links

  22. #72
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    CDM 2.2 - 4K Test Only (5 runs, 50 MB test size)

    1st - 1 MB Writes
    Sectors - 2790048 to 85187656
    45 1 MB xfers. Total write = 45 MB

    2nd - 4K Reads
    Sectors 2788704 - 52015696 - (Sector order totally random, but read sectors were read again - completely randomly, around 12 times).
    168,095 4K xfers. Total read = 656 MB

    3rd - 4K Writes
    Sectors 2788704 - 85189760 - (Sector order totally random, but written sectors were written to again - completely randomly, around 3 or 4 times).
    19,615 4K xfers. Total write = 76 MB

    Again, only a rough outline. Xfers other than 1 MB and 4K were removed; it was not much data, however.

    It seems that sectors are also preconditioned, but with 1 MB writes, although the sectors are completely random and over a much wider span. At a guess, the preconditioned sectors were the locations of the 4K reads and writes.

    Either way it is quite different to AS SSD
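
    For what it's worth, here is a sketch of the kind of tally behind the "read again around 12 times" observation, using a made-up in-memory record format rather than the actual trace export:

    Code:
    # Sketch of the tally behind "read again around 12 times": given per-I/O records
    # as (operation, starting sector) tuples (a made-up in-memory format, not the
    # actual trace export), count how often each sector is re-read or re-written.
    from collections import Counter
    from statistics import mean

    def reuse_stats(records):
        reads = Counter(s for op, s in records if op == "read")
        writes = Counter(s for op, s in records if op == "write")
        return {
            "unique sectors read": len(reads),
            "avg reads per sector": round(mean(reads.values()), 1) if reads else 0,
            "avg writes per sector": round(mean(writes.values()), 1) if writes else 0,
        }

    # Toy input: three sectors, each read several times and written at most twice.
    records = ([("read", 2788984)] * 8 + [("write", 2788984)] * 2
               + [("read", 52005656)] * 18 + [("write", 52005656)]
               + [("read", 33933672)] * 16)
    print(reuse_stats(records))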
    Last edited by Ao1; 04-21-2011 at 01:24 PM.

  23. #73
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Marc HFR View Post
    For me, 2,049 0.5 MB xfers = 1,024.5 MB
    Yep sorry. I re-ran it to make sure.

  24. #74
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post

    2nd - 4K Reads
    Sectors 2788704 - 52015696 - (Sector order totally random, but read sectors were read again - completely randomly around 12 times).
    168,095 4K xfers. Total read = 656 MB
    Great data, thanks for doing that test!

    52015696 - 2788704 = 49,226,992 sector span = 23.5 GB (assuming 512B sectors).

    So, AS-SSD has a 1GB span, and CDM has a 23.5GB span, and yet CDM measures a HIGHER 4KB QD=1 random read value. Hmmmmm.
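
    For anyone checking the conversion, a quick sketch (assuming 512-byte sectors, as above):

    Code:
    # Quick check of the conversion, assuming 512-byte sectors:
    span_sectors = 52015696 - 2788704
    print(span_sectors)                      # 49226992 sectors
    print(span_sectors * 512 / 1024**3)      # ~23.47, i.e. the ~23.5 GB quoted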

    I do not understand your comment above about "read sectors were read again". Do you mean that of the 656 MB, only 656 / 12 MB is unique sectors, and each sector is read 12 times? Or something else?

  25. #75
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    For example (with CDM)

    Sector 2788984 was read 8 times and written to twice, but on a completely random basis.

    Sector 52005656 was read 18 times and written to once, again randomly.

    Sector 33933672 was read 16 times with no writes, again randomly. Edit - no 4K writes; I did not check whether the sector had been written to by the 1 MB writes.

    Edit: Also the sectors were banded. For example:

    85181704 to 85189760
    51198048 to 59935488
    Last edited by Ao1; 04-21-2011 at 02:09 PM.

