I'd agree that Vista may need a different approach to pagefile monitoring and setup, my comments are based on older versions of Windows that don't have this Superfetch behaviour.
Quote:
    As for disk configurations... here's what I would suggest... in the interest of parallelism, you want 3 different physical drives...
    1) OS and Apps. The OS is used to boot, the apps are used after boot. No sense putting these on different drives since they aren't accessed at the same time in general.

I'd strongly disagree with point 1 here. The reason is the API (Application Programming Interface) for Windows.
When you load an app, the executable doesn't contain all the code needed to run the program. Much of its work is done by calling DLLs (Dynamic Link Libraries) and other external code modules to perform I/O and user-interface work; some are provided by the application, but most are provided by Windows itself. This is why all Windows apps have much the same "look and feel": they use the same modules to provide the buttons, checkboxes, scrollbars, cursor behaviour and so on. And the surface appearance of the app is only a very small part of the API.
If you disassemble an app, you find near the end a list (the import table) of all the API functions and external DLL modules it needs to function. Therefore, when the app is loaded into memory, the first thing the OS must do is resolve which of these it needs. Some core DLLs are so common that they will always be resident in memory (kernel32.dll, for instance), but others are more esoteric and have to be loaded from your C:\Windows\System32 folder as required. And remember, there may also be proprietary DLLs shipped as part of the app's setup, which live in the app's own folder.
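To make this concrete, here's a minimal sketch in C of what the loader does behind the scenes for every import-table entry, done explicitly via the documented LoadLibrary/GetProcAddress APIs. The choice of user32.dll and MessageBoxA is purely illustrative:

```c
/* A minimal sketch of what the Windows loader does implicitly for every
 * entry in an app's import table: locate the DLL on disk, map it into
 * the process, and resolve the addresses of the functions the app calls. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The loader searches the app's own folder, then system folders such
     * as C:\Windows\System32 -- the same cross-disk reads described above. */
    HMODULE hLib = LoadLibraryA("user32.dll");
    if (hLib == NULL) {
        printf("LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    /* Resolve one exported function by name, exactly as the loader does
     * for each import-table entry. MessageBoxA lives in user32.dll. */
    typedef int (WINAPI *MsgBoxFn)(HWND, LPCSTR, LPCSTR, UINT);
    MsgBoxFn pMessageBox = (MsgBoxFn)GetProcAddress(hLib, "MessageBoxA");
    if (pMessageBox != NULL)
        pMessageBox(NULL, "Loaded via GetProcAddress", "Demo", MB_OK);

    FreeLibrary(hLib);
    return 0;
}
```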
So the upshot is that before an app can run, there will be multiple file reads from its own folder, from the Windows\System32 folder, and probably from other places as well, as all the necessary code modules are loaded. And since modular programming is generally the best way to manage a large app, the larger the app, the more of its code will be modular and external, so the more cross-loading there will be; this is why loading a large app is so slow. To really speed this up, you DO need the app and OS on different disks.
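If you want to see this for yourself, here's a small sketch using the documented EnumProcessModules/GetModuleFileNameEx APIs to list every DLL mapped into a process. Even a trivial program like this pulls in several system DLLs, each of which had to be read from disk (depending on your SDK you may need to link psapi.lib, or -lpsapi with MinGW):

```c
/* List every module mapped into the current process: the EXE itself,
 * then the DLLs the loader had to read from System32 and elsewhere. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    HMODULE mods[1024];
    DWORD bytesNeeded = 0;

    if (EnumProcessModules(GetCurrentProcess(), mods, sizeof(mods), &bytesNeeded)) {
        DWORD count = bytesNeeded / sizeof(HMODULE);
        for (DWORD i = 0; i < count; i++) {
            char path[MAX_PATH];
            /* Each entry is one file the loader read from disk. */
            if (GetModuleFileNameExA(GetCurrentProcess(), mods[i], path, sizeof(path)))
                printf("%s\n", path);
        }
    }
    return 0;
}
```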
Then you have the pagefile in play: some of the data being requested to load the app may have been dumped from RAM to the pagefile earlier in the session, so it will be reloaded from THAT disk area. Or the OS may decide it needs to dump current memory contents to the pagefile to make room for the loading app, so it will be writing there instead. And all this happens concurrently with whatever your OTHER apps are doing; for instance, a Usenet, torrent or FTP app may be doing its own file reading/writing in the background using yet another disk area.
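As a rough way to watch the memory pressure that drives this pagefile traffic, here's a hedged sketch using the documented GlobalMemoryStatusEx API, which reports physical RAM load and remaining pagefile-backed commit space:

```c
/* Report how much physical RAM and commit space is in use. When physical
 * load is high, loading another app forces the OS to page data out. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (GlobalMemoryStatusEx(&ms)) {
        printf("Physical RAM in use: %lu%%\n", ms.dwMemoryLoad);
        printf("Free physical RAM:   %llu MB\n",
               (unsigned long long)(ms.ullAvailPhys / (1024 * 1024)));
        /* "PageFile" here means total commit (RAM + pagefile); a shrinking
         * ullAvailPageFile means more data is being dumped to disk. */
        printf("Free commit space:   %llu MB\n",
               (unsigned long long)(ms.ullAvailPageFile / (1024 * 1024)));
    }
    return 0;
}
```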
Here's an ideal schema for partitioning a system to achieve maximum drive parallelism and response speed. The drives here can be RAIDed sets, but the separation of function is best maintained by using three drives like this. The arrows show which disk partitions might be used concurrently, and the D/E/F data areas can obviously be subpartitioned as required for better organisation.

In this scheme I'm assuming that some apps may be part of the OS installation or installed into the OS partition (eg. Outlook Express), so data used by such an app (eg. email, shown here as E) should be on a different disk to the OS. The least accessed data should be on the same disk as the OS, and any frequently accessed data (torrent/usenet download areas etc.) on a completely separate disk. Disks are ordered by preferred speed, fastest at the top to slowest at the bottom.
[Diagram: three-drive partitioning schema as described above, with arrows marking which partitions are used concurrently]
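As an aside, here's a small illustrative sketch (not part of the schema itself) that walks the mounted drive letters with the documented GetLogicalDriveStrings/GetDriveType/GetDiskFreeSpaceEx APIs, which is handy for checking how a multi-drive layout like this is actually populated:

```c
/* Enumerate fixed drives and report total/free space per drive letter. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char drives[256];
    DWORD len = GetLogicalDriveStringsA(sizeof(drives), drives);

    /* The buffer holds NUL-separated root paths: "C:\", "D:\", ... */
    for (char *d = drives; d < drives + len && *d; d += strlen(d) + 1) {
        ULARGE_INTEGER freeBytes, totalBytes;
        if (GetDriveTypeA(d) == DRIVE_FIXED &&
            GetDiskFreeSpaceExA(d, NULL, &totalBytes, &freeBytes)) {
            printf("%s  %llu GB total, %llu GB free\n", d,
                   (unsigned long long)(totalBytes.QuadPart >> 30),
                   (unsigned long long)(freeBytes.QuadPart >> 30));
        }
    }
    return 0;
}
```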



