
Thread: hIOmon SSD Performance Monitor - Understanding desktop usage patterns.

  1. #51
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Computurd View Post
    Are the QDs going higher on the SSD simply because it is reading faster and can handle it better?
    This is what I am starting to suspect. Maybe high peak QDs are occurring because the SSD is dealing with requests so quickly. Maybe that is why the average QD stays around the same.

    Quote Originally Posted by Computurd View Post

    photoshop test:
    max data transferred HDD; 283,496,448
    max data transferred SSD; 778,919,936

    But you ran the same test on them? Why would it write 2.7 times more data to the SSD for the same size file?

    For Modern Warfare 2 the results are much the same... a tremendously larger amount written to the SSD with the exact same usage?

    It seems like all of the write comparisons across the board follow this same pattern. Your read numbers for the amount transferred look very close, though. Are you sure that you cleared the test before you ran it again on the SSD?
    I picked up on this as well. Maybe I just had Photoshop open longer and it auto-saved while I was monitoring the SSD. For MW2 I loaded a level, played the level, let the next level load and then closed it. I did the same on the HDD and the SSD, so I have no idea why the writes were so much higher.

    Later today I am going to install a game on both the HDD and the SSD. That way I will know for sure that I am writing the exact same amount of data to both.

    The monitoring records per process, but it seems TRIM instructions are being sent out all the time by the OS, and maybe that is part of the reason the SSD in general is seeing a lot more writes.

    The clear conclusion for me at this stage is that the vast majority of IOP reads/writes are done in less than a ms with the SSD, which is obviously why an SSD feels so snappy, and I'm not sure if RAID could improve that. What I don't know is whether max MB/s speeds would increase in a RAID setup.

    Comp, can you run a game on a RAID array and show the results as I have shown in the last couple of posts?
    Last edited by Ao1; 10-25-2010 at 01:38 AM.

  2. #52
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is what happens when I install Bad Company 2. First I reset hIOmon, rebooted and then I installed the game.

    It's not that straightforward, as the install generated multiple .exe files.


    Last edited by Ao1; 10-25-2010 at 05:49 AM.

  3. #53
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    From the SSDPerfAnalysisFS filter using the HDD. This is a summary of the game install, the patch update and general use whilst waiting for the patch to download and install.

    Now to reset for the game load comparison.


  4. #54
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is what happens when I play Bad Company 2 for the first time. I waited for the intro to play out and then immediately proceeded to get myself killed. Waited for the reload and then closed the program. It looks like this game does small incremental loads. Maybe later on in the game the loads are more significant.


  5. #55
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Boy, glad to be back on the SSD. That HDD sucked. Anyway, that's it now for the HDD vs SSD comparison, although I can check anything out if it's of interest to anyone.

    Next up I look at the boot log, but what I really want to see is some results from anyone with an SSD RAID array, in particular max MB/s rates for game and application loading. Clearly my max MB/s rates are well below my drive's capabilities, so it would be interesting to see if a RAID array made much difference.

  6. #56
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    A quick word of caution when monitoring I/O operations at either the physical volume level or the physical device level as regards the process-related metrics:

    As previously noted, the hIOmon software can (concurrently) capture I/O operation performance metrics at three distinct levels within the I/O stack within the operating system:

    1) The file-level (logical drive/disk)
    2) The physical volume level, and
    3) The physical device level (which is similar to the "PhysicalDisk" level in the Windows Performance/System Monitor).

    It is important to note that I/O operation performance metrics at the file-level are observed by the hIOmon I/O Monitor within the context of the particular process that initiated the respective I/O operation. Consequently, the process-related metrics (e.g., the "process name") associated with the other respective metrics do reflect what the particular process did/experienced in terms of I/O activity.

    However, it is another story further down at the physical volume and physical device levels. In the case of I/O operations at either of these levels within the operating system, the hIOmon I/O Monitor observes the I/O operations within an essentially indeterminate context.

    That is, the "process/thread" currently running at the time during which the hIOmon I/O Monitor observes the I/O operation could be for any process (and not necessarily for the particular process that initiated the "originating" I/O operation up at the file-level).

    What this all means is that if you only configure the hIOmon software to monitor I/O operations at either the physical volume and/or physical device levels, then you need to be cautious about which particular process is identified with a respective set of I/O operation metrics.

    Now in these cases the metrics reported do accurately reflect the particular process that was "running" when the hIOmon I/O Monitor observed the respective I/O operation (but again, this process is not necessarily the one that initiated/originated the I/O operation).

    So overall then, if you are interested in capturing I/O operation performance metrics related to specific processes in general, then you need to collect I/O operation performance metrics at the file-level (so that the hIOmon I/O Monitor can initially observe the I/O operations within the context of the particular process that initiated the I/O operations).

    Note too that the hIOmon software also provides a "Physical Device Extended Metrics" option, whereby if you have configured the hIOmon software to capture such metrics and to concurrently monitor also at the physical volume and/or physical device levels, then it can automatically correlate the I/O operations performed at these levels with those at the upper file-level. Basically this enables you to associate the physical volume/device I/O operations with their respective process (which again, cannot be done - in general - by monitoring only at the physical volume and/or physical device levels as noted above).

    The reason I said "in general" is that there are applications that essentially bypass performing their I/O operations at the file-level, but instead initiate them directly at either the physical volume or physical device level within the operating system - but this is another whole story.

    Hopefully the above is clear.

    And perhaps another key take-away here is that the "flow" of I/O operations through the I/O stack of the operating system is not as "simple" as maybe some presume.
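
    To make the above distinction concrete, here is a toy sketch in Python (my own illustration, not hIOmon code; the request_id correlation key is purely an assumption for the example) of why device-level observations must be correlated back to file-level ones before they can be attributed to the originating process:

        # Toy model: file-level events carry the issuing process; device-level
        # events only carry whichever process happened to be running.
        from dataclasses import dataclass

        @dataclass
        class FileLevelIO:
            request_id: int       # hypothetical correlation key (assumption)
            process: str          # the process that actually issued the I/O
            path: str
            length: int

        @dataclass
        class DeviceLevelIO:
            request_id: int       # hypothetical correlation key (assumption)
            current_process: str  # process running when the I/O was observed
            lba: int
            length: int

        def attribute(device_ios, file_ios):
            """Correlate device-level I/Os back to their originating process,
            analogous in spirit to the 'Physical Device Extended Metrics'."""
            by_id = {f.request_id: f.process for f in file_ios}
            for d in device_ios:
                origin = by_id.get(d.request_id, "<unknown>")
                print(f"LBA {d.lba}: observed under '{d.current_process}', "
                      f"originated by '{origin}'")

        file_ios = [FileLevelIO(1, "photoshop.exe", r"C:\scratch.tmp", 65536)]
        device_ios = [DeviceLevelIO(1, "System", 123456, 65536)]
        attribute(device_ios, file_ios)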

  7. #57
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ Thanks for the clarification and continued guidance. I can understand what you are saying about the importance of the file level for getting the right metrics, but hopefully there is something to learn from what I have done so far. There are so many ways to do things and so many things to monitor that it's a bit overwhelming.

    Here I post the top ten devices using the SSD. The summary includes what I monitored above plus an install of Left 4 Dead 2. Once Left 4 Dead 2 was installed I played an MP3 with WMP, a DVD with PowerDVD, ran a quick scan with MS Security Essentials and played a track in Traktor, all at the same time, whilst working on a Word document and downloading a huge patch file for Left 4 Dead 2. Way OTT compared to what I would normally run at the same time.



  8. #58
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    OK, here is the same as above with the HDD. I noticed the CPU was maxing out quite a lot when I ran all the programs. I didn't check that with the SSD, but I'd guess it was the same.

    So what can I conclude? That it's impossible to hit the SSD's max sequential read speeds with the programs I used? Maybe faster sequential writes would not go amiss? I'm guessing that if most IOPs are fast (less than a ms) there would not be much benefit if they got even faster?



  9. #59
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    ^ Thanks for the clarification and continued guidance. I can understand what you are saying about the importance of the file level for getting the right metrics, but hopefully there is something to learn from what I have done so far. There are so many ways to do things and so many things to monitor that it's a bit overwhelming.
    Happy to be of help.

    Looking at what you have done so far, you have shown some interesting data points.

    These observations can help elicit a discussion of which particular metrics along with which specific scenarios are perhaps of primary interest.

    In addition, it might be good to revisit Computurd's comment in post #3 of this thread (i.e., his comment: "so lets talk methodology here...how are we going to go about it? what will be the baseline?").

    Regarding the "overwhelming" part of your comment, again there is an inherent, underlying complexity as to how I/O operations are actually handled at the "host" end of the cable (i.e., from the application down through the operating system).

    The hIOmon software makes these dynamics much more transparent to folks - for better or worse!

    In any case, one approach to consider is to formulate specific questions for which the hIOmon software can make available empirical metrics, bearing in mind that this is often an iterative process (especially as the "peeling of the onion" unfolds).

    Lastly, a brief explanation of the "<DASD>" stuff that appears within your prior screenshots (in case some folks are curious).

    The hIOmon software provides the option to collect summary metrics upon an individual, specific device basis (whether it be a logical drive/device, a physical volume, or a physical device within the operating system).

    The "device summary" metrics reflect the overall aggregate of all summarized I/O operation metrics for I/O operations that were monitored by the hIOmon I/O Monitor and which were directed to the respective device.

    In the case of a Device Summary for a logical drive/device (e.g., the "C:" drive), the "device summary" metrics reflect the overall aggregate of all summarized I/O operation metrics for all files (and only those files) that were monitored by the hIOmon I/O Monitor and which reside upon the respective device.

    This approach enables you to obtain an overall picture of I/O activity to a particular logical device limited to only those particular files (and optionally processes) that are of specific interest to you.

    Now I/O operation activity at the logical drive/device level is normally directed towards a specific file/path.

    However, in those cases where a specific file/path name is not associated with the I/O operation, a "pseudo file name" of "<DASD>" is reported (since there was no actual file/path specified as part of the I/O operation). This can occur when an I/O operation for a device that is being monitored is being performed to access filesystem-related data structures (meta-data) present upon the device (although more recent versions of Windows have begun to provide explicit "file names", e.g. "$Mft", for these meta-data files).

    In addition, “<DASD>\VMM" is displayed when the system flags associated with the I/O operation indicate involvement by the system “Virtual Memory Manager (VMM)”.

    Two last points:

    1) In the case of physical volumes and physical devices, the I/O operations are directed to blocks/sectors (i.e., Logical Block Addresses) and not offsets within files (since at these levels the I/O operations are directed to specific blocks/sectors residing upon the respective volume/device). Consequently, only the "pseudo file names" are reported within the metrics collected by the hIOmon software (and mainly so that one can differentiate between I/O operations involving the VMM and those that do not).

    2) The summary metrics for the device proper are, as noted above, the overall aggregate for all of the associated files (including pseudo files).

    And if you are wondering why the read max QD for DR0 is 21 in your prior screenshots, while the highest read max QD for an associated (pseudo) file is only 20 (i.e., for "<DASD>"), this is because at the time that the read max QD for "<DASD>" was 20, there was presumably also one I/O operation queued for "<DASD>\VMM", thus bringing the aggregated total to 21 for the "DR0" device overall. (This could be substantiated if an I/O trace were also collected.)
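
    That aggregation is easy to see in a small sketch (a toy model in Python, my own illustration rather than hIOmon code): the device-level max QD is taken over the sum of all files' outstanding I/Os at each instant, so it can exceed the max QD of any single file:

        # Per-file vs device-level max queue depth from a toy event trace.
        def max_queue_depths(events):
            """events: (time, file, +1 for I/O start / -1 for I/O complete).
            Returns (per-file max QD dict, device-level max QD)."""
            qd, file_max, dev_qd, dev_max = {}, {}, 0, 0
            for _, name, delta in sorted(events):
                qd[name] = qd.get(name, 0) + delta
                file_max[name] = max(file_max.get(name, 0), qd[name])
                dev_qd += delta
                dev_max = max(dev_max, dev_qd)
            return file_max, dev_max

        # 20 reads outstanding on "<DASD>" while one is queued on "<DASD>\VMM":
        events = [(t, "<DASD>", +1) for t in range(20)]
        events += [(19.5, "<DASD>\\VMM", +1)]
        events += [(t + 100, "<DASD>", -1) for t in range(20)]
        events += [(130, "<DASD>\\VMM", -1)]

        file_max, dev_max = max_queue_depths(events)
        print(file_max)  # {'<DASD>': 20, '<DASD>\\VMM': 1}
        print(dev_max)   # 21, matching the DR0 example above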

  10. #60
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I got excited by what hIOmon could do, so admittedly I rushed in a bit. A bit of methodology is now in order.

    First here are the specs for my SSD:

    • Sustained Sequential Read: up to 250MB/s
    • Sustained Sequential Write: up to 100MB/s
    • Read latency: 65 microseconds
    • Write latency: 85 microseconds
    • Random 4KB Reads: up to 35,000 IOPS
    • Random 4KB Writes: 160GB X25/X18-M: up to 8,600 IOPS

    What would I really like to know? Primarily I'm trying to understand how well my SSD performs in relation to my typical desktop use. Is it adequate, or could it be faster? What needs to be faster? This raises four questions for me, although maybe they are not the right questions.

    1. How much am I utilising the available max sequential read/write speeds overall?
    (From what I've seen so far it would seem that read speeds are underutilised but, surprisingly to me, maybe faster sequential writes would speed things up. If I've got that wrong, what is the best way to find out? A rough framing sketch follows this list.)

    2. How much am I utilising the max sequential read speeds for loading a game? What is limiting the max MB/s read speed?

    3. Would an increase in IOPS capability give me any advantage?

    4. Would an SSD RAID 0 array improve responsiveness and make my apps work faster in comparison to a single SSD?
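
    Regarding question 1, a trivial way to frame "utilisation" is the observed max MB/s divided by the rated max MB/s. A sketch in Python with made-up observed numbers (the rated figures come from the spec list above; the observed values are purely hypothetical):

        # Rated figures from the spec list above; observed values are made up.
        RATED_READ_MBS = 250    # sustained sequential read (spec)
        RATED_WRITE_MBS = 100   # sustained sequential write (spec)

        def utilisation(observed_max_mbs, rated_mbs):
            """Fraction of the drive's rated sequential speed actually reached."""
            return observed_max_mbs / rated_mbs

        # Hypothetical values read off a 'Read/Write Maximum MB/s' summary:
        print(f"read:  {utilisation(80, RATED_READ_MBS):.0%}")   # 32%
        print(f"write: {utilisation(70, RATED_WRITE_MBS):.0%}")  # 70%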

    Building an SSD RAID 0 array is possible, but it's going to be loads of work: the other drive I have is only 80GB, so I could not use an image and would therefore have to install everything from scratch. (Not to mention my wife is currently using it.)

    Somehow I don't think that it would be necessary anyway with hIOmon.
    Last edited by Ao1; 10-25-2010 at 01:04 PM.

  11. #61
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    I got excited by what hIOmon could do, so admittedly I rushed in a bit. A bit of methodology is now in order.
    Good to hear of your enthusiasm!

    Certainly various benchmark tools can be used to explore the limits of your system.

    The extent to which your particular applications, overall system, and normal, everyday usage actually reach (or require) those limits is, of course, another matter.

    But capturing empirical metrics (that reflect your actual usage) can be an important step in establishing, for example, a baseline understanding of your system and usage, as you have begun to do.

    And the quantitative nature of the metrics can be valuable in evaluating and determining, for instance, the actual merits of system/application changes and improvements.

    Your four questions are good ones. But I defer to the experts in this forum to hear their suggestions and thoughts as to the possible answers.

  12. #62
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by overthere View Post

    The extent to which your particular applications, overall system, and normal, everyday usage actually reach (or require) those limits is, of course, another matter.
    That, in a nutshell, is where my interest lies.

    There are some very knowledgeable people on the storage forum so I'm hoping they can give some input at some stage.

    hIOmon has shown me that I am well out of my depth, but I will try to work through it regardless, although I now realise I don't even understand the basic concepts of storage. This is summarised well in my query below.

    If I look at the data transfer speeds per single IOP read I get 260MB/s.



    Here I get a data transfer speed for a single IOP write of 251MB/s.



    Here is a summary of the top ten processes





    How could I be getting those speeds for a single IOP?

    I don't understand the relationship between the single IOP MB/s speeds and the speeds below, which I assume are sequential speeds?








    Today I am using the Presentation Client to monitor my normal use and I'll post a screen shot of that later.
    Last edited by Ao1; 10-26-2010 at 06:18 AM.

  13. #63
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a Device I/O summary from the Presentation Client. For this I used my PC as I would do normally.


  14. #64
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    If I look at the data transfer speeds per single IOP read I get 260MB/s.

    Here I get a data transfer speed for a single IOP write of 251MB/s.

    How could I be getting those speeds for a single IOP?

    I don't understand the relationship between the single IOP MB/s speeds and the speeds below, which I assume are sequential speeds?
    Here is a short explanation for each of these two MB/s metric types:

    Read Single IOP Maximum MB/s:

    This metric basically reflects the highest data transfer rate observed by the hIOmon I/O Monitor for a single read I/O operation.

    This MB/s rate value is the amount of data transferred by the single read I/O operation divided by the amount of time that it took to perform the operation (i.e., the time duration of the I/O operation) as observed by the hIOmon I/O Monitor.

    Read Maximum MB/s:

    This metric represents the maximum MB/s rate for read I/O operations as detected by the hIOmon I/O Monitor during a one-second interval since the hIOmon I/O Monitor began collecting I/O operation performance data for the file (or device or process).

    This metric reflects the combined total amount of data transferred by all of the read I/O operations during the one-second interval.

    There is perhaps a subtle distinction between these two metric types.

    The first (i.e., the Read Single IOP Maximum MB/s) is based upon the time duration of the single I/O operation itself, whereas the second metric is based upon the total amount of data transferred (by all of the respective I/O operations) over the course of a one-second time interval.

    In terms of an example, say that a read I/O operation transferred one byte within one millisecond.

    By the first metric type noted above, this would represent a "one thousand bytes per second" data transfer rate.

    But such a "single IOP" MB/s rate is not to say that one thousand bytes were transferred during the course of a one second interval.

    On the other hand, the second metric type does reflect the actual maximum amount of data transferred during a one-second interval.
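
    One way to see the difference is to compute both metric types from the same records. A minimal sketch (my own simplified illustration, not hIOmon's internals; binning by completion time is an assumption), reproducing the one-byte-in-one-millisecond example:

        # Records of completed read I/Os: (start_time_s, duration_s, bytes).
        from collections import defaultdict

        def single_iop_max_rate(ios):
            """Highest per-operation rate: bytes / duration, in bytes per second."""
            return max(b / d for _, d, b in ios)

        def interval_max_rate(ios):
            """Most data actually transferred within any one-second interval,
            binned here by completion time for simplicity."""
            buckets = defaultdict(int)
            for start, dur, b in ios:
                buckets[int(start + dur)] += b
            return max(buckets.values())

        # The example above: one byte transferred in one millisecond.
        ios = [(0.0, 0.001, 1)]
        print(single_iop_max_rate(ios))  # 1000.0 bytes/s, the rate of the op
        print(interval_max_rate(ios))    # 1 byte actually moved in that second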

  15. #65
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Dorset, UK
    Posts
    439
    I guess the conclusions to your experiments will be fascinating for all of us, Ao1.

    It did strike me, though, that this software might be able to give us the answer to an age-old question that's dogged this forum and many others before: the need for, and actual OS usage of, pagefiles in a system with a large amount of installed RAM. It's not unrelated, since those of us with SSDs are all keen to reduce writes where possible, even if SSDs are ideally suited for pagefiles as MS claims.

    Given the entrenched views on this, anyone interested in doing that research would be something of a hero.

  16. #66
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I had to give it a try.

    This is a small sample, a bit of surfing using Chrome, not much more than that.

    himon_testing.PNG

    Interesting results.

    e.g. random vs sequential I/O

    himon_testing_2.PNG

  17. #67
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    @Anvil, is that with the 9260, the 9211 or the Areca? You have all the goodies.
    Also, which drives, etc.?
    I noticed your write QD is much lower than Ao1 has been getting, which leads me to believe you are on a controller with cache, ruling out the 9211... but then again, with an array the writes will be much better even without cache.

    @Ao1, yes, this has raised more questions than it has answered so far, but good questions nonetheless... this exercise is definitely going to broaden our horizons a bit.
    My plan is taking shape in my mind:

    I plan on creating one system image, then testing that image on a single Vertex, then an array, and also a Caviar Black. Win7 x64, of course.

    There will be some timing of game loads and various apps for comparison.

    All tests will be run at the same clock. I think 4.4 ought to be about right. I feel that many of the less-than-superior load times etc. are due to systems not being fast enough to hang with it, so I want to test with a high baseline clock.
    What I am thinking of is not 'load testing' in its entirety though. As a matter of fact, I am less interested in the loading metrics than most would be.
    I am going to run a 'suite' of programs and not begin monitoring until they are all loaded, in an effort to see the load of the system 'in flight' and compare the three.

    Load testing: games to test for load times will be my usual, Crysis and Left 4 Dead. I also plan to run a metric while those games are loading the same levels on each of the three setups.

    In-flight performance testing: my standard gaming config: two instances of Firefox, video chat on MSN, the game, and various desktop gadgets, also running AIDA64 for monitoring, FRAPS for FPS, and Kaspersky antivirus.
    Any combination of programs that you would like to see, let me know. I have TechNet so I can get all the MS stuff, and I also have PS; anything else I can get off the 'Binz just as well as anyone else.

    I can get any OS I need off TechNet, so I might run Vista as well to see if there are significant differences in QD handling with a single SSD; if I see any significant difference I will expand upon that testing with an SSD array. What, if any, major difference there is between Vista and Win7 with an SSD intrigues me... to save time, though, if there is no difference I will abandon that.

    Sometimes with so-called gen 1 SSDs (specifically the Vertex) the Win7 environment will not detect the SSD until you run winsat disk, either via the command line or via WEI. There is a means of telling if it has detected the SSD: you can check the defrag schedule, and if it hasn't detected the SSD it will be in the schedule list. If Win7 HAS detected it, the SSD will be removed from that list. Hopefully I can replicate this issue and get the OS not to recognize the single SSD. It would speak volumes to run the same tests with and without the OS being 'optimized' for it. Maybe it can help quantify the actual amount of filesystem optimization, if any.

    I'm not as interested in monitoring processes/programs individually as I am in the device usage as a whole. I am not programming here; I just wanna see the device-level metrics. However, I will make a standard list of metrics that will be captured, so let me know of any you would like included. I will make the list up when I get set up, and post it before testing.

    I'm sure I will add to this before I begin in roughly seven days. Any and all opinions/thoughts on tests to be added will be appreciated. This is gonna take some time and setup, so I want to do it right the first time.
    Bored waiting on the Intel G3 anyway, so let's do this.
    Last edited by Computurd; 10-26-2010 at 06:05 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  18. #68
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    It's the good old ICH running three Intel X25-M 160GB G2s in RAID 0.

    I'll try to do some Iometer/SQLIO tests later tonight; so far I'm just playing with the different config options.

  19. #69
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Guys,

    I think we need to be very careful about what we are doing, or we will end up with a lot of data that does not seem to make sense.

    • First off we need to establish the metrics that are of interest. Right there is the first big challenge.
    • Next we need to be able to compare the exact same workloads, and that is also a big challenge.
    • We also need to be careful about how we monitor, as per post #56 in response to what I discovered in post #55.
    • How do we establish whether the overall system is impacting the storage performance? Already I have seen my CPU max out when I multitasked, which (I assume) must have impacted storage performance.

    Looking at Anvil's results in post #66, his average QD is 1,514 with just a bit of web browsing, and his maximum response times seem quite high. Clearly there are going to be some large variations in certain metrics between a single SSD, onboard SSD RAID and hardware RAID SSD, so we need to understand what these are going to be and, more importantly, what to look for in terms of the performance output. Here I am struggling, as I am out of my depth.

    With regard to IanB's post #65, that is something I think hIOmon could do quite easily. The default filter monitors the page file, so I think it could isolate and monitor page file activity and then assess the overall system impact with the page file either on or off. Again, we need a clear methodology for going about that to save wasting time.

    So... first off, what are the metrics that are of interest, and why?

  20. #70
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1

    I agree. I need to play a bit with the metrics to try to fully understand them, and possibly limit the I/O to specific files, e.g. an Iometer test file.

    I'm not comfortable with the metrics yet; I need to know what is actual I/O, meaning DRn vs <DASD> vs VMM.

    The only way of getting the same workload would be to use benchmarks like Iometer; all other approaches would lead to non-comparable data.
    (We all have different setups, drivers, background tasks, ...)

    I think we need to play a bit to get to know what to look for; QD and access times are metrics of general interest, imho.

    I'll try comparing against last night's session to get results without using the pagefile.

  21. #71
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Anvil, in post #23 Overthere explains that it is possible to use a benchmark as a plug-in. This would give a fixed data workload. I was hoping that I could monitor my normal use and come to some conclusions about how well my SSD is utilised and whether any improvements in performance would yield real-life gains, but on the other hand at least a benchmark provides a comparable workload that we could all work with.

  22. #72
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1,
    One can't run a benchmark as a plug-in; the point is that one can have a clean start/stop for the metrics, which is very important (and which I was looking for).

    I'm currently thinking in line with CT, meaning that I'll first try monitoring the system as a whole and then later monitor files, i.e. VMware files.
    (I'd love to know what I/O looks like for VMs, random vs sequential in particular.)
    I've got separate "devices" for the VMs, so that shouldn't be too hard to monitor.

  23. #73
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    I'm not comfortable with the metrics yet; I need to know what is actual I/O, meaning DRn vs <DASD> vs VMM.
    There is a brief explanation of the "<DASD>" stuff back in post #59.

    To hopefully be clearer, basically:

    1) The hIOmon software generates "pseudo file names" for those I/O operations that are observed by the hIOmon I/O Monitor for which there was no file name/path specified (or applicable).

    2) Two different "pseudo file names" are used: "<DASD>" and "<DASD>\VMM"

    3) "<DASD>\VMM" represents collectively those I/O operations whose associated system flags indicated involvement by the Windows system "Virtual Memory Manager (VMM)".

    4) "<DASD>" represents collectively all other I/O operations (i.e., other than the "<DASD>\VMM" I/O operations) for which there was no file name/path specified (or applicable).

    5) All I/O operations performed at either the physical volume or physical device level specify basically a block/sector address (rather than a file name/path) upon the volume/device to which the I/O operation is directed.

    Consequently, all I/O operations to a physical volume/device will be relegated by the hIOmon software to either the "<DASD>" or the "<DASD>\VMM" pseudo file name.

    6) The DRn represents the "Device summary" metrics for the associated physical device.

    The "device summary" metrics reflect the overall aggregate of all summarized I/O operation metrics for I/O operations that were monitored by the hIOmon I/O Monitor and which were directed to the respective device/volume.

    Accordingly, the "device summary" metrics for DRn reflect the overall aggregate of both of the associated pseudo files (i.e., "<DASD>" and "<DASD>\VMM").
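
    If it helps, those naming rules boil down to a few lines (a sketch of my own, not hIOmon source; "vmm_flag" stands in for the system flags that indicate VMM involvement):

        # Pseudo file naming rules, per points 1-5 above.
        def pseudo_file_name(file_path, vmm_flag):
            if file_path:      # a real file name/path was specified
                return file_path
            if vmm_flag:       # no name/path, and the VMM was involved
                return "<DASD>\\VMM"
            return "<DASD>"    # no name/path, everything else

        # Physical volume/device I/Os carry a block/sector address rather than
        # a path, so they always land on one of the two pseudo names:
        print(pseudo_file_name(None, vmm_flag=True))    # <DASD>\VMM
        print(pseudo_file_name(None, vmm_flag=False))   # <DASD>
        print(pseudo_file_name(r"C:\pagefile.sys", vmm_flag=False))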

    Sorry for any confusion.
    Last edited by overthere; 10-27-2010 at 09:27 AM.

  24. #74
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    We'll get there in the end.

    I've got another X25-M 160GB arriving tomorrow, so I will soon be able to post some results in RAID 0. Can't wait.

  25. #75
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    We'll get there in the end.

    I've got another X25-M 160GB arriving tomorrow, so I will soon be able to post some results in RAID 0. Can't wait.
    It's often an iterative process, and I am happy to answer questions about the hIOmon software as they arise.

    And I'm looking forward to your upcoming results.
