Page 13 of 16 FirstFirst ... 310111213141516 LastLast
Results 301 to 325 of 376

Thread: hIOmon SSD Performance Monitor - Understanding desktop usage patterns.

  1. #301
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by johnw View Post
    So, then who was your post directed to? I did not see anyone who was confused.
    My post #297 started:
    "I might have inadvertently led some folks to think based upon my prior posts #283 and #285 that AS SSD first performs its write testing followed by its read testing when only its "Access Time" option is selected."

    My post #297 was not directed to any specific person (otherwise I would have quoted or mentioned same).

    I was simply trying to make clear that the sequence of my posts ("writes" discussed before "reads") did not reflect the actual occurrence of same.

    In any case, I apologize if my attempts at clarification in turn caused any consternation amongst anyone.

    And I assume that your post above was directed towards me.

  2. #302
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by overthere View Post

    Here is the actual sequence as observed by hIOmon.........
    Hi overthere,

    This seems to explain a couple of observations I have made. When running the access time bench on a Raptor I can see the head spanning right across the platter in step 1. (Raptors have a Perspex viewing cover, but even if you can't see it you can certainly hear it.) In action 2 step 2 the heads remain localised.

    I've been running a lot of access and 4K benchmarks with AS SSD lately and in the process I've seen some quite large variations in write speeds. What puzzled me was that after running a number of benchmarks in succession write speeds would drop, but then I'd occasionally get a much higher score, followed by a much lower score.

    I can only assume that when higher write speeds occurred it was due to action 2 step 2 occurring at a different location containing clean blocks/sectors.
    Last edited by Ao1; 03-06-2011 at 03:48 AM.

  3. #303
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    ..
    I feel privileged that overthere has contributed to this thread. It is rare that someone with such knowledge takes time out to help people learn.
    Kind words, but I am just amongst the many others who visit this forum to help as they can.

    And I am also here to learn as well from the experts and users who share their time, knowledge, and experiences on this forum.

    This applies to hIOmon too. hIOmon is a software tool. I know a bit about how it works, but I certainly am not an expert in regards to all of the different settings/scenarios/systems where this tool can be used.

    So I am keenly interested to learn and understand what users like Ao1 and Anvil uncover and discover as they explore with hIOmon.

  4. #304
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    I feel privileged that overthere has contributed to this thread. It is rare that someone with such knowledge takes time out to help people learn. From some of your posts you also seem to have a good understanding of storage and I was looking forward to your contributions.

    I'm going to delete this post later as it has nothing to do with the thread. I just wanted to express how important this thread is to my quest to better understand how storage works.
    I agree that most of overthere's posts to this thread have been helpful. I'm not sure why you addressed this to me instead of him. Both of you have now written ambiguous posts where you circle around what you want to say, but do not explicitly write it.

    If you think someone is confused, then write WHO is confused and what about. Like this:

    Ao1: I think you are confused about my recent exchange with overthere. I was just trying to clarify who he thought was confused, and what about. I was not attacking overthere in any way. There is no need for you to defend him to me. If you want to praise him, great, but please do not address such a post to me!

  5. #305
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    To expand on the variation in 4K write performance at QD1 that I saw following repeated testing with AS SSD: I observed write speeds ranging from 30MB/s to 47MB/s.

    If a drive is preconditioned naturally or synthetically by writing across the full span of the logical disk, it will at some stage start to wear level and swap out dirty blocks for clean blocks previously hidden within the over-provisioning capacity. This may be why I was able to see a degraded drive suddenly produce better results, which then degraded again in the next benchmark.

    If that is correct and a reviewer does not precondition a drive by writing to the full raw capacity, the benchmark results might appear better than they should.

    This raises a few questions for me. How many writes are needed before the blocks get rotated and how many blocks get swapped over at any one time?

    I'd guess it varies between different SSDs, but without this knowledge how can you be sure you are benchmarking only preconditioned blocks?

    If a drive is in constant use, does it allow data to be written to clean over-provisioned blocks and then swap the blocks over at a later date? If so, that would also perhaps inflate a "used state" benchmark.
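    The effect described above can be illustrated with a toy model (this is not any real controller's algorithm, and the block counts and costs are made up purely for illustration): an SSD exposes a number of user blocks but keeps extra over-provisioned blocks hidden. Writes are cheap while clean blocks remain; once the clean pool is exhausted, every write first needs an erase, so performance drops until wear levelling rotates in fresh blocks.

    ```python
    # Toy model of over-provisioning and erase-before-write cost.
    # All numbers are illustrative, not taken from any real SSD.
    from collections import deque

    class ToySSD:
        def __init__(self, user_blocks=100, op_blocks=7):
            self.clean = deque(range(user_blocks + op_blocks))  # erased blocks
            self.dirty = deque()                                # hold stale data

        def write_block(self):
            """Return the cost of one block write: 1 while clean blocks
            remain, 3 (write + erase overhead) once the pool is exhausted."""
            if self.clean:
                self.dirty.append(self.clean.popleft())
                return 1
            # steady state: recycle a dirty block (erase-before-write)
            self.dirty.append(self.dirty.popleft())
            return 3

    ssd = ToySSD()
    costs = [ssd.write_block() for _ in range(300)]
    print(costs[:107].count(1))   # first 107 writes hit clean blocks
    print(costs[107])             # after that, each write pays the erase cost
    ```

    A benchmark that stops before the clean pool is exhausted only ever sees the cheap writes, which is one way a result can look better than steady-state performance.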

  6. #306
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    This raises a few questions for me. How many writes are needed before the blocks get rotated and how many blocks get swapped over at any one time?

    I'd guess it varies between different SSD's but without this knowledge how can you be sure you are benchmarking only preconditioned blocks?
    I like bit-tech's method. It reminds me of the line "nuke the entire site from orbit -- it's the only way to be sure!".

    http://www.bit-tech.net/hardware/201...-ssd-preview/2

    To simulate a protracted heavy workload, we then connect the drives to a secondary system running without TRIM support and copy the entire 100GB contents of the C drive over to the SSD. These files include operating system files, multiple game installs, MP3s and larger video files – the typical contents of a modern hard disk. Once the write to the SSD is completed, these files are then deleted and the process is repeated ten times, resulting in a total write of over 500GB to our SSDs.
    Last edited by johnw; 03-06-2011 at 06:37 PM.
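    The copy/delete cycle quoted above could be sketched roughly as follows (scaled way down so it runs anywhere; the paths, file sizes, and pass count are placeholders, not bit-tech's actual procedure):

    ```python
    # Scaled-down sketch of a copy/delete preconditioning loop.
    import os
    import shutil
    import tempfile

    src = tempfile.mkdtemp()   # stands in for the source drive contents
    dst = tempfile.mkdtemp()   # stands in for the target SSD mount point

    payload = os.path.join(src, "payload.bin")
    with open(payload, "wb") as f:
        f.write(b"\0" * 64 * 1024)          # 64 KiB stand-in data set

    total = 0
    for _ in range(10):                     # ten copy/delete passes
        copy = shutil.copy(payload, dst)    # write the data set to the "SSD"
        total += os.path.getsize(copy)
        os.remove(copy)                     # delete, then repeat

    print(total // 1024)                    # KiB written across all passes -> 640
    shutil.rmtree(src)
    shutil.rmtree(dst)
    ```

    The point of the repetition is that, without TRIM, the deletes are never communicated to the drive, so every pass dirties more flash blocks.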

  7. #307
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I think preconditioning is also dependent on the type of data written. Large files by all accounts do not degrade a drive in the same way that small files do.

    Also reviewers tend to omit benchmarks with mixed random read and write tests, which result in significantly reduced IOPs.

    Here are a few extracts from various industry presentations.

    The last image appears to show that different drives require different preconditioning. Drive size and over provisioning allowance would also have a significant impact.

    The second implies that the impact of mixed reads/ writes is significant, yet mostly ignored.

    Lastly, the first image implies at least three hours of heavy writing is required to get a drive into a steady state.

    Links 1, link 2, link 3

    There is also some interesting info over at IMEX
    Attached thumbnails: 2.png, 1.png, XX.gif
    Last edited by Ao1; 03-07-2011 at 12:49 AM.

  8. #308
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    Also reviewers tend to omit benchmarks with mixed random read and write tests, which result in significantly reduced IOPs.
    Yes, it would be nice if AS-SSD would add a 50/50 or 70/30 mixed read/write test. Of course it can be done with IOMeter, but the nice thing about AS-SSD is that it standardizes the tests, so that various setups can be easily compared without wondering whether the person running the test has set it up in exactly the same way.

  9. #309
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is another view from SandForce.

    SandForce states it takes from 2 to 48 hours to reach a steady state, depending on the SSD and the test.

    Performance will also vary if you write 4K files and then switch to a larger file size.
    Attached thumbnail: YYY.gif

  10. #310
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I have read that SandForce document some time ago, a few years old but still valid, lots of great info.

    Mixing reads and writes is crucial; that's how most of us work, and the standard benchmarks don't do that.
    IOmeter/SQLIO both are great for testing this.

    I'm almost done creating the basic iometer configs and have so far tested some of the most common SSDs.
    The idea was to get the same tests performed on used SSDs (not just mine) and then collect values for variance.

    Here's an example using the File Server pattern, lots of block sizes and 80% read.

    Fileserver_2011_03_06_SSD.jpg
    (the rest is coming up in a different thread)
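    For anyone curious about the shape of such a mixed pattern, here's a rough toy generator: the 80% read split matches the post above, but the block-size list and uniform weighting are illustrative only, not Iometer's actual File Server access specification.

    ```python
    # Toy generator for a File Server-style mixed workload:
    # ~80% reads across a spread of block sizes.
    # Block sizes and weights are illustrative, not Iometer's spec.
    import random

    random.seed(0)  # fixed seed so runs are repeatable
    block_sizes = [512, 1024, 2048, 4096, 8192, 16384, 65536]

    def next_io():
        """Pick one I/O: read with 80% probability, random block size."""
        op = "read" if random.random() < 0.80 else "write"
        return op, random.choice(block_sizes)

    ops = [next_io() for _ in range(10_000)]
    reads = sum(1 for op, _ in ops if op == "read")
    print(round(reads / len(ops), 2))   # fraction of reads, close to 0.8
    ```

    Feeding a drive a stream like this, rather than pure reads or pure writes, is what exposes the IOPS drop that the standard benchmarks miss.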

  11. #311
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    ...
    Here are a few extracts from various industry presentations. ...
    Hi Ao1,

    Thanks for mentioning industry efforts in regards to SSD performance testing.

    The Storage Networking Industry Association (SNIA) also has had a notable ongoing effort for some time. Please see the "Solid State Storage Initiative (SSSI)":

    http://www.snia.org/forums/sssi/

    And in particular, the SNIA "SSS Technical Work Group" (TWG):

    http://www.snia.org/forums/sssi/programs/twg

  12. #312
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi Anvil, I hope we are going to see the snazzy 3D surface profile graphs.

    If you have a chance, have a look at the Iometer benchmarks for the Vertex 2 EX. They were produced by DDRdrive for the 2010 Open Storage Summit. They seem to have gone out of their way to expose the Achilles heel, but the results are interesting nonetheless.

    Interesting also to see the hex code for the "repeating byte" test.

    Edit: The benchmarks linked above were without trim. Benchmarks were also run with trim:

    'Specifications are knowingly disingenuous'
    Regarding TRIM, a means for Windows to tell the SSD controller about deleted data in the SSD, DDRdrive's George said: "I did run the exact same tests on Windows 7 (which does support TRIM) and the OCZ results do marginally increase. But the overall trend line stays constant; *dramatic* IOPS degradation relative to published specifications. Basically, unless one uses a development version of Iometer, which happens to default to a contrived data pattern, the published specifications are knowingly disingenuous."

    The Register




    Hi overthere. Thanks for the links.

    Edit: I guess you have seen this?
    Last edited by Ao1; 03-08-2011 at 01:23 AM.

  13. #313
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    Hi overthere. Thanks for the links.

    Edit: I guess you have seen this?
    Basically, the short answer is: Yes, in some form and from other sources.

  14. #314
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a comparison of Black Ops multiplayer with and without FancyCache. I reloaded the same map throughout. 3GB level 1 cache, block size 64KB, level 2 cache 128MB with invisible memory enabled, defer write disabled.

    I'll run it again later to see if the max read response time was just a blip.
    Attached thumbnails: fancy cache.png, without.png

  15. #315
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Looks like the 2s response time was a blip. As well as capturing the overall stats I also collected stats for the specific Black Ops folder. (The excel files are too large to post). This gave me around 7,000 different I/O entries. Out of those entries I averaged the I/O performance.

    Based on the I/O's related only to Black Ops there is little difference in performance between FancyCache on or off, which ties in with the fact that I didn't notice any improvement in game loading or play. As I loaded the same map each time this was quite disappointing.

    (The second run with FancyCache was after a reboot).
    Attached thumbnails: correct.png, repair.png
    Last edited by Ao1; 03-12-2011 at 02:07 PM.

  16. #316
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Same as above on the same OS image, but on a HDD (Seagate Barracuda 160GB). The first level load was noticeably slow (both with and without FancyCache). After that it was hard to tell the difference between HDD & SSD. It seems that Windows does quite a good job of caching files.
    Attached thumbnails: hdd cache.png, the end.png, Untitled.png
    Last edited by Ao1; 03-12-2011 at 02:12 PM.

  17. #317
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here I look at file transfers with and without NCQ. It seems that 4K reads/writes at QD64 are not the only thing to benefit from NCQ.

    Multiple file = 1.37GB, 174 files, 6 folders, jpegs of various sizes. (Edit: all copied across in one folder).
    AVI = 1.36GB

    Edit: The file transfers were on the same disk.

    From a MSc thesis by Lars Pedersen

    "Surprisingly, some SSD drives supports NCQ, although obviously not due to mechanical overhead and positioning delay scheduling. Intel found that SSD disks are so fast that NCQ is needed to compensate for latency encountered in the host system. Clearly, device latency is not an issue for SSD disks".
    Attached thumbnails: Summary Xcel AVI.png, Summary Xcel multiple files.png
    Last edited by Ao1; 03-17-2011 at 02:13 PM.

  18. #318
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    AVI file transfer
    Attached thumbnails: SSD no NCQ AVI.png, SSD with NCQ AVI.png
    Last edited by Ao1; 03-17-2011 at 02:14 PM.

  19. #319
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Multiple file transfer
    Attached thumbnails: SSD no NCQ multi (2).png, SSD with NCQ multi (2).png

  20. #320
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    These entries to the physical device were generated by simply hovering the mouse over the file. This is on a non-OS drive with no AV. It doesn't happen to every file you hover over, so it's a bit random, but it does appear that hovering the mouse over a file is enough on some occasions to generate a read in anticipation of the file being opened. Intelligent read-ahead, I guess.
    Attached thumbnail: A.png

  21. #321
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Depends on the file. For every file, Windows will get its file information (not read the file data) when the mouse hovers over it - it's not random; it depends on how the mouse is set up. When a "stop" is detected, the file under the mouse is queried.
    For executables, Explorer will also read the file version data, thus reading several parts of the file.
    For multimedia, it will read a good deal of data to make a preview. It's not a read-ahead.

  22. #322
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Thanks alfaunits. Edit: The last entry on the excel sheet is a jpeg, and the read data transfer size is exactly the same as the jpeg's file size. When I say read-ahead, what I mean is that the act of hovering the mouse over the file was enough for Windows to read the data from the device in advance of the file actually being opened. Am I seeing that incorrectly?

    I'm still struggling to understand the process of a large file transfer.

    At a best guess.....

    A large file is constructed from multiple file sections that are linked by records that allow contiguous reading. On a HDD these files are placed close together to minimise access time. Over time they may become fragmented, which slows down the read time. On an SSD sections of the file are placed randomly and are moved around for wear levelling. To read the file the SSD therefore needs to locate all the file records related to the file, which are distributed randomly over the SSD.

    To speed up this process NCQ enables up to 32 commands, which can be executed in parallel. I'm assuming this is why, in post #137 (with NCQ enabled), the read time was the same for a large file and multiple (unlinked) smaller files. In other words, contiguous reading was enabled by commands being executed in parallel rather than serially, as in the case of a HDD. The net result is that latency between reads is reduced. Edit: the benefit of NCQ is therefore not limited to higher QDs.

    If the maximum read transfer size is 1,048,576 bytes, I'm assuming that one I/O operation is required to read that portion of data. If the same size section of data is read multiple times, the read IOP count is multiplied, and the combined read transfer length equals the read IOP count multiplied by the size of the data blocks being transferred.

    Based on the above I'm assuming that larger files have larger and more consistent read transfer sizes, so the IOP count is lower for a large sequential transfer when compared to multiple small files, i.e. the average transfer block size is much larger than for a smaller file transfer.
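    The relationship above can be checked with a quick worked example (the IOP counts and average sizes below are made up for illustration, chosen so both workloads move the same total amount of data):

    ```python
    # Worked example: total bytes transferred = IOP count x average
    # transfer size per operation. Figures are illustrative only.
    max_xfer = 1_048_576            # 1 MiB maximum read transfer size

    # Large sequential file: few, big transfers
    seq_iops, seq_avg = 1_400, max_xfer
    # Many small files: 16x the IOPs at 1/16th the transfer size
    small_iops, small_avg = 22_400, 64 * 1024

    print(seq_iops * seq_avg)       # bytes moved by the sequential transfer
    print(small_iops * small_avg)   # same total bytes, far more IOPs
    ```

    Both products come out equal, which is the point: the same amount of data read through smaller transfers simply costs proportionally more I/O operations.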

    This however does not seem to be the case with the AVI file transfer. Presumably the read format structure is the same, yet with NCQ the read IOP count is considerably higher to read the same amount and format structure of data.

    Now I'm stuck. I can't understand why that would be the case.
    Last edited by Ao1; 03-19-2011 at 06:36 AM.

  23. #323
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    These entries to the physical device were generated by simply hovering the mouse over the file.... This is on a non OS drive with no AV. It doesn't happen to every file you hover over, so it's a bit random but it does appear that hovering a mouse over a file is enough on some occasions to generate a read in anticipation of the file being opened. Intelligent read ahead I guess.
    Hi Ao1,

    The "AcroRd..." process from Adobe is another "application" that can elicit such activity.

    Some additional details along these lines can be found within the "File I/O Activity (Undercover)" presentation document available within the Documentation section of the hyperI/O web site.

  24. #324
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    ... I'm still struggling to understand the process of a large file transfer.

    ... This however does not seem to be the case with the AVI file transfer. Presumably the read format structure is the same, yet with NCQ the read IOP count is considerably higher to read the same amount and format structure of data.

    Now I'm stuck. I can't understand why that would be the case.
    One important factor to consider for large file transfers (and ostensibly any file I/O activity) is the rather intricate interplay amongst the various levels of the OS I/O stack.

    As previously mentioned (and not to belabour the point), the hIOmon software can optionally monitor I/O operation activity at three distinct levels (concurrently) within the Windows OS I/O stack: the file level (logical disk), the physical volume level, and the physical device level.

    The metrics shown within your post #371 above reflect I/O operations performance metrics that were collected by hIOmon at the physical volume level (specifically "\Device\HarddiskVolume3"). In one sense, the physical volume level is an "intermediate" level between the file-level (logical disk) and the physical device level.

    Now in regards to "data transfer size/length" alone, take a 10 MiB file (for instance). An application could use a single read I/O operation to transfer the entire file in one fell swoop.

    However, this single read file I/O operation could be broken up into ten separate 1 MiB read I/O operations further down at the physical volume level - so the hIOmon software would accordingly observe 10 "PhyDev" read I/O operations (each with a data transfer size of 1 048 576 bytes) at this level within the OS I/O stack.

    And yet further down at the physical device level, these ten 1 MiB read I/O operations could again be broken up into smaller data transfer sizes/lengths (e.g., 128KiB), which in turn results in multiple, "additional" read I/O operations as observed by the hIOmon I/O Monitor (that is, in contrast to the associated number seen at the physical volume level and file level).
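    The arithmetic in the 10 MiB example above works out as follows (the 1 MiB and 128 KiB split sizes are the ones given in the example, not fixed properties of any particular stack):

    ```python
    # Op counts for one 10 MiB application read as it is split going
    # down the Windows I/O stack, per the example sizes above.
    import math

    file_bytes = 10 * 1024 * 1024      # 10 MiB read by the application
    vol_xfer   = 1_048_576             # 1 MiB per physical-volume op
    dev_xfer   = 128 * 1024            # 128 KiB per physical-device op

    file_ops = 1                                   # one file-level read
    vol_ops  = math.ceil(file_bytes / vol_xfer)    # ops at the volume level
    dev_ops  = math.ceil(file_bytes / dev_xfer)    # ops at the device level

    print(file_ops, vol_ops, dev_ops)   # -> 1 10 80
    ```

    So the same logical read legitimately shows up as 1, 10, or 80 operations depending on which level hIOmon is observing, even before fragmentation or other processes add to the counts.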

    Of course, there are other factors that might also come into play, such as file fragmentation (as you suggested), other concurrent I/O operation activity (e.g., by other processes), etc.

    In any case, to your point about "... yet with NCQ the read IOP count is considerably higher to read the same amount and format structure of data", it appears to me that the read IOP count is roughly about 700 more IOP (PhyDev Read IOP Count) with NCQ for both the AVI and "multiple files" cases - and, moreover, there is roughly about 3 MB more read data transferred (PhyDev Read Data Xfer) in both cases.

    Furthermore, this additional read I/O operation activity and read data transferred is accomplished within less overall time (PhyDev Read Time Total) with NCQ - although writes are a different story in the "multiple files" case (and there might be some read-to-write tradeoff there).

    So it seems that there is some factor/entity that is instigating the additional read IOPs and data transfer amounts using NCQ for both cases.

    I'm hesitant to speculate about what this factor/entity might be; rather, further configuration of the hIOmon software could help shed some light via additional metrics/observations.

  25. #325
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi overthere. Thanks for the feedback

    I have tried to follow the user guide to get the Raw Device Extended Feature Option to work, but somehow I must be missing a step. Will try again later.
    For now here is the same transfer (run separately) as captured by the Presentation Client. There is still a 700 IOP & 3MB difference on the PhyDev.
    Attached thumbnails: AVI NCQ Disabled.png, AVI NCQ Enabled.png
    Last edited by Ao1; 03-20-2011 at 07:31 AM.

