In post #221 Anvil provided a hIOmon Presentation Client screenshot that showed various metrics collected during an ATTO Disk Benchmark run upon his 9260 array. Here are some considerations and further observations that might be of interest:
Consideration A: Certain metrics shown within the hIOmon Presentation Client display reflect either an average calculated for the latest Observation period or an overall average calculated for the time span from the start of the first Observation period up until the end of the latest Observation period.
For example, the Read IOPS value of 3552.4 (as shown within the screenshot) is simply the total Read I/O operation count (1 066 249) divided by the time duration of the latest Observation period (5 minutes and 0.1453561 seconds, i.e., 300.1453561 seconds), which was the only Observation period (so far).
So basically what the Read IOPS represents is the average IOPS rate over the course of the Observation period (i.e., an average of 3552 read I/O operations per second during the 5 minute Observation period).
Of course, the ATTO benchmark performed read I/O operations at varying transfer sizes (and accordingly varying IOPS rates) during the overall benchmark test period. In any case, the hIOmon Presentation Client calculates/reports the average IOPS rate based upon the time duration of the particular Observation period.
One might also have noticed that the last read I/O operation completed approximately 1 minute and 6 seconds before the end of the 5 minute Observation period (and so if this 1 minute and 6 second time duration, within which no read I/O operations were performed, were excluded from the average read IOPS calculation, then the calculated average read IOPS value would be greater than the 3552.4 IOPS value).
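The average-IOPS arithmetic described above can be sketched as follows (the figures are taken from Anvil's screenshot; the roughly 66-second idle duration is approximate):

```python
# Average read IOPS as the hIOmon Presentation Client calculates it:
# total read I/O operation count divided by the Observation period duration.
read_ops = 1_066_249          # total read I/O operations (from screenshot)
period_secs = 300.1453561     # Observation period: 5 min 0.1453561 s

avg_iops = read_ops / period_secs
print(f"{avg_iops:.1f}")      # ~3552.4 read IOPS over the full period

# If the roughly 1 min 6 s of trailing idle time (no read I/O) were
# excluded, the calculated average would be correspondingly higher.
active_secs = period_secs - 66        # approximate "active" duration
print(f"{read_ops / active_secs:.1f}")
```

This illustrates why the reported value is an average over the whole Observation period rather than a peak rate for any particular transfer size.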
The hIOmon Presentation Client calculates the "average" IOPS value in this manner so that users can perform, for instance, various capacity planning and performance analysis studies over extended time frames (e.g., to identify those particular time periods of intense and/or low I/O operation activity).
Please note that the length of an "Observation period" is essentially the user-specified periodic time interval for which the hIOmon software can be configured to collect the summary I/O operation performance metrics for display, export, etc.
As an alternative to collecting summary I/O operation performance metrics upon a user-specified periodic basis, the hIOmon software can also be configured to collect the summary metrics for offload when the respective file is closed and/or upon an "alert" basis (i.e., when a user-specified threshold - for instance, some maximum read I/O operation response time value or some total amount of write data transferred - has been detected/observed by the hIOmon I/O Monitor for the respective file).
Consideration B: The hIOmon "SSD I/O Performance Analysis" script option configures the hIOmon software so that I/O operation activity performed by any process will be included within the summary I/O operation performance metrics that are collected.
Consequently, the collected summary metrics displayed by the hIOmon Presentation Client (as shown within the screenshot) might include I/O operations that were performed by processes other than the ATTO benchmark program. These processes could include system processes that update various file system metadata (which can occur since the read and write I/O operation activity performed by the ATTO benchmark tool is directed to a file).
In any case, the actual amount of I/O operation activity performed by such processes is likely negligible, especially since the bulk of the I/O operation activity was due to the ATTO benchmark program and provided that no other process (such as an antivirus program) performed appreciable I/O operation activity directed at the logical drive targeted by the ATTO benchmark. However, any such I/O operation activity that is extraneous to that performed directly by ATTO can skew (however slightly) the reported metrics from those expected to be attributable to the ATTO program alone.
Consideration C: I configured the hIOmon software to take a closer (although still very cursory) look at how the ATTO Disk Benchmark actually performs its "benchmarking" I/O activity.
One thing that I first noticed is that ATTO begins by generating (i.e., writing) its "benchtst.$$$" test file. If the selected value of the ATTO "Total Length" option is 2GB (as in the case of Anvil's screenshot), then ATTO will generate/write a 2GB test file using 256 write I/O operations each with a data transfer length of 8388608 bytes, which happens to be the largest "write I/O operation" data transfer size shown within the hIOmon Presentation Client screenshot and also the largest selectable ATTO "Transfer size" option value.
Incidentally, it appears (based upon the I/O operation performance metrics that I collected using hIOmon) that if you select an ATTO "Total Length" option value smaller than 8MB (technically 8MiB), then ATTO will nevertheless write out an initial test file that is 8 MiB in size.
Anyway, some of the key points here are: (1) ATTO always starts out by generating/writing its test file - so there is some initial write I/O operation activity performed by ATTO before it actually begins its various "transfer size" I/O operation activity, which is the basis of its "advertised" benchmarking; and (2) the write I/O operation activity by ATTO that occurs when initially generating its test file is included within the "Write I/O operation" metrics collected/shown within Anvil's hIOmon Presentation Client screenshot.
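The test-file generation arithmetic above can be checked directly: 256 writes of 8,388,608 bytes (8 MiB) each amount to exactly the 2GB "Total Length":

```python
# Size of the "benchtst.$$$" test file as generated by ATTO:
# 256 write I/O operations at the 8 MiB maximum transfer size.
writes = 256
transfer_bytes = 8_388_608        # 8 MiB, largest ATTO "Transfer size"

total_bytes = writes * transfer_bytes
print(total_bytes)                # 2147483648 bytes
print(total_bytes == 2 * 1024**3) # True: exactly 2 GiB
```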
Consideration D: Based once again upon my collection of I/O operation performance metrics using hIOmon, I noticed that ATTO apparently performs varying amounts of read (and write) I/O operations. The difference between successive ATTO runs was sometimes on the order of thousands (e.g., between 10 000 and 20 000 overall total read I/O operation count difference between successive ATTO runs). Perhaps this difference is due to some system-dependent phenomenon. In any case, please note that I configured the hIOmon software to collect summary metrics for the "benchtst.$$$" file only and only for the ATTO "bench32.exe" process (using ATTO Disk Benchmark version 2.46).
This read and write I/O operation variance is disconcerting to me (at least based upon my working with folks defining benchmarking standards). To my understanding, one of the tenets of sound benchmark design is the "consistent repeatability of the stimulus". In other words, the benchmarking tool should perform the same prescribed activity each time the benchmark tool is run using the same configuration options (otherwise it is difficult to perform, for example, an "apples-to-apples" comparison).
Consideration E: The ATTO benchmarking tool does not necessarily read (or write) the entire "Total Length" for (at least) the 512 byte transfer size. This is evident in Anvil's hIOmon Presentation Client screenshot. For a transfer size of 0.5 KiB and a "Total Length" of 2 GiB, a total of 4 194 304 sequential read I/O operations (each with a data transfer size of 512 bytes) would be required to read the entire 2 GiB test file.
As shown in Anvil's screenshots, ATTO was configured to include a transfer size of 0.5 KiB. The hIOmon Presentation Client shows a smallest read data transfer size of 512 (along with a maximum read data transfer size of 8 388 608 - which confirms that the ATTO "Transfer size" span that Anvil specified was in fact performed). However, the hIOmon Presentation Client shows an overall total of 1 066 249 read I/O operations performed; as noted above, 4 194 304 sequential read I/O operations would be required to read the entire 2 GiB test file at the 512-byte data transfer size alone. Anvil might have been pointing this out with his highlighting of portions of the hIOmon Presentation Client screenshot.
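The read-count arithmetic behind this observation can be sketched as follows (figures again taken from the screenshot):

```python
# Reading the full 2 GiB test file sequentially at a 512-byte transfer
# size would require 2 GiB / 512 B = 4,194,304 read I/O operations.
total_length = 2 * 1024**3        # 2 GiB "Total Length"
transfer = 512                    # 0.5 KiB transfer size

required_reads = total_length // transfer
print(required_reads)             # 4194304

# The observed overall total of 1,066,249 reads - covering ALL transfer
# sizes combined - falls well short of that figure.
print(required_reads > 1_066_249) # True
```

In other words, the observed grand total across every transfer size is only about a quarter of what the 512-byte pass alone would need to cover the full 2 GiB.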