Quote Originally Posted by Ao1 View Post
Looking at the individual processes I ran, none of them could utilise the maximum sequential read speed of my SSD. When I ran multiple processes together to try to reach that maximum, the CPU maxed out instead.
This underscores what we need to keep in mind when looking at SSD marketing numbers for our relatively undemanding desktop systems.

10,000 IOPS or more may scream "better!", but if the highest IOPS the system actually demands is in the hundreds, there's no practical gain from the extra headroom. Surely pulling huge quantities of data off the disk faster (higher streaming speed) is better? But the processor then has to do something with that vast quantity of data, which beyond a certain point puts the bottleneck back on processing speed.
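The headroom point is easy to see with some rough arithmetic (illustrative numbers, not measurements from the tests discussed here):

```python
# How much of a drive rated at 10,000 IOPS does a desktop workload
# demanding only a few hundred I/Os per second actually use?
def utilisation(demanded_iops: float, rated_iops: float) -> float:
    """Fraction of the drive's rated IOPS the workload actually consumes."""
    return demanded_iops / rated_iops

# A desktop demanding ~300 IOPS leaves a 10,000 IOPS drive ~97% idle.
print(f"{utilisation(300, 10_000):.1%}")  # -> 3.0%
```

However high the rated figure climbs, the unused fraction is dead capability for this kind of workload.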

For server workloads, high IOPS are significant, since you have multiple users making multiple requests, and higher streaming speeds may mean individual requests complete faster. That's obviously an advantage.

But the bottom line you've demonstrated, if my skim of the results and your analysis is correct, is that read/write latency is the only significant factor affecting speed for most single-user desktop systems without specialised requirements. That should be the obvious metric to compare between disks. And if you think about it, that's what we've always done with storage: the mechanical disk with the lower seek time has always had the overall speed advantage.
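Why latency dominates single-user feel can be sketched like this (a simplified model with illustrative figures, not benchmark results): at a queue depth of 1, i.e. one outstanding request at a time, which is the common desktop case, throughput in IOPS is simply the reciprocal of per-request latency.

```python
# At queue depth 1, a serial requester can only issue the next I/O once
# the previous one completes, so latency alone caps the I/O rate.
def iops_at_qd1(latency_ms: float) -> float:
    """I/Os per second achievable by a single serial requester."""
    return 1000.0 / latency_ms

# ~0.1 ms per request caps a serial workload near 10,000 IOPS;
# a 10 ms mechanical seek caps it at 100 IOPS.
print(round(iops_at_qd1(0.1)))   # -> 10000
print(round(iops_at_qd1(10.0)))  # -> 100
```

So a drive's headline IOPS (measured at deep queues) tells you little about a desktop; the per-request latency is what the single user actually waits on.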

And yes, the pagefile result is fascinating! I don't think anyone would have expected that turning the pagefile off completely affects speed. As a guess, maybe Windows speculatively dumps some reads into the pagefile simply because it's there, wasting time when the system's physical memory is enough to handle the total virtual-memory demand.