Interesting. The Momentus XT was faster than the VelociRaptor, even on the first run.
There is a run 0 where I just boot the OS; it's needed because VMware reacts to the VM being "moved" to a new drive.
The VR isn't what it used to be; the new 600GB edition is certainly faster than the old one (up to 20-30% on some tasks).
So in this instance the OS issued a single command to the SSD to wipe ~ 1.5 gigabytes of “deleted” data? I assume the SSD controller then waits for a quiet period and then deletes the data when it will have no/ less impact.
I have created a separate thread for Winbootinfo, otherwise this thread will end up way off topic.
http://www.xtremesystems.org/forums/...d.php?t=261924
Last edited by Ao1; 11-08-2010 at 08:06 AM.
The WD SiliconEdge is no slouch
It doesn't scale with QD, so in that regard it's on par with HDDs; still, it's a great drive imho.
vm_boot_build_shutdown_ICH_WD_SILICONEDGE_BLUE_64_1.PNG
New chart...
hiomon_summary_5.PNG
Last edited by Anvil; 11-08-2010 at 12:28 PM.
Yes, and that is certainly one way that the SSD controller could handle the "backend" processing of the TRIM commands from the OS.
By "backend", I mean the actual manner in which (and when) the SSD controller performs various flash management tasks (e.g., wear-leveling) based upon the DSRs that it has received from the OS.
As you know, essentially the TRIM commands currently provide a way for the OS to notify the SSD that a specific set of (logical) blocks/sectors upon the SSD are considered by the file system to be "deleted data blocks" (that is, sectors upon the SSD that the file system considers to no longer contain valid data).
But as to what exactly the SSD does with these "trimmed" blocks/sectors (and when), this is, I believe, basically "vendor-specific" at the current time.
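To make the "wait for a quiet period" idea concrete, here is a toy Python model of one way a controller *might* defer the backend work for trimmed blocks. It is purely illustrative (the class and timings are made up); as noted above, what real firmware actually does is vendor-specific.

```python
import time

class ToySsdController:
    """Toy model only - not any vendor's actual firmware."""

    def __init__(self):
        self.trimmed_ranges = []          # LBA ranges the OS reported as deleted
        self.last_io = time.monotonic()   # time of the last host read/write

    def on_trim(self, start_lba, sector_count):
        # OS sent a TRIM/DSR: just remember the range, do nothing yet.
        self.trimmed_ranges.append((start_lba, sector_count))

    def on_host_io(self):
        # Any host I/O resets the idle timer.
        self.last_io = time.monotonic()

    def background_task(self, idle_threshold_s=2.0):
        # Called periodically: reclaim trimmed blocks only during a quiet period.
        if self.trimmed_ranges and time.monotonic() - self.last_io > idle_threshold_s:
            start, count = self.trimmed_ranges.pop(0)
            print(f"reclaiming {count} sectors from LBA {start} while idle")

ssd = ToySsdController()
ssd.on_trim(1_000_000, 3_000_000)   # ~1.5GB of 512-byte sectors marked deleted
time.sleep(2.1)                     # no host I/O for a while...
ssd.background_task()               # ...so the controller reclaims the space
```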
This "old-timer" is still doing great
(it was formatted prior to the test and so it is not in steady state)
vm_boot_build_shutdown_ICH_G2_160GB_1.PNG
2R0 Intel X25-M 160GB G2
Hmmm
vm_boot_build_shutdown_ICH_2R0_G2_160GB_2.PNG
hiomon_summary_6.PNG
Last edited by Anvil; 11-08-2010 at 04:40 PM.
Here I monitor Call of Duty Black Ops single player for 30 minutes, watching the Call of Duty folder at a 1-minute interval. The first surprise is the number of processes that were generated, which are summarised below; all of them contributed random and sequential read requests, to varying degrees, over the 30-minute period. In total, 64 processes generated 1,127 individual reads (a rough sketch of how to extract such stats from an exported log follows the metrics below).
Games are not all about sequential reads, random read speeds seem to also be important.
Read speeds
IOP count
Fast read IOP count: worst case = 1%, average = 98%
Queue depth: maximum observed = 5
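For anyone who wants to reproduce this kind of breakdown, here is a minimal Python sketch of how per-process read stats like the ones above could be pulled out of an exported I/O log. The CSV layout (process and response_ms columns) and the 1ms "fast read" threshold are assumptions for illustration; adapt them to whatever your monitoring tool actually exports.

```python
import csv
from collections import Counter

FAST_READ_MS = 1.0                    # assumed threshold for counting a read as "fast"
LOG_FILE = "cod_blackops_reads.csv"   # hypothetical export from the monitoring tool

reads_per_process = Counter()
fast_reads = total_reads = 0

with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):     # expects 'process' and 'response_ms' columns
        reads_per_process[row["process"]] += 1
        total_reads += 1
        if float(row["response_ms"]) <= FAST_READ_MS:
            fast_reads += 1

print(f"{len(reads_per_process)} processes generated {total_reads} individual reads")
print(f"fast-read share: {100 * fast_reads / total_reads:.0f}%")
for proc, count in reads_per_process.most_common(5):
    print(f"  {proc}: {count} reads")
```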
Last edited by Ao1; 11-10-2010 at 02:57 AM. Reason: XLS log file added
Here I show around 1 ½ hours of activity on the page file with the size set automatically by Windows.
During the monitoring period I ran most of the typical programs I use all the time, including:
• Large and small Photoshop files (2MB to 2.3GB)
• Web browsing with 7 pages open (IE9 and Chrome)
• Traktor Scratch
• WMP
• Checked emails and cleaned up the inbox
• Black Ops MP
• Worked on a Word document
• Worked on an Excel document
On average Windows Task Manager recorded around 64 processes running.
OS = Win 7/64 with 8GB RAM.
The figures tell the story.
You have 8GB of memory, and the biggest file you opened in Photoshop was around a quarter of that, yet despite all that spare RAM the system saw fit to dump a piddly 280MB of data to the pagefile - just in case? But this data clearly wasn't needed again, since it only read back in 90MB tops.
The pagefile activity you showed was totally redundant. The system wastes time using it if it is there, so it's simpler and more efficient to monitor your maximum Commit Charge for your working set of apps and data over time, then make sure your RAM is more than enough to cover that and turn the pagefile off instead.
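For measuring that, here is a rough Windows-only Python sketch of the "watch your peak commit charge over time" approach (polling interval and run length are arbitrary). It reads the commit charge via the Win32 GetPerformanceInfo call through ctypes; treat it as a sketch, not a polished tool.

```python
import ctypes
import ctypes.wintypes as wt
import time

class PERFORMANCE_INFORMATION(ctypes.Structure):
    # Field layout follows the Win32 PERFORMANCE_INFORMATION structure.
    _fields_ = [("cb", wt.DWORD),
                ("CommitTotal", ctypes.c_size_t),
                ("CommitLimit", ctypes.c_size_t),
                ("CommitPeak", ctypes.c_size_t),
                ("PhysicalTotal", ctypes.c_size_t),
                ("PhysicalAvailable", ctypes.c_size_t),
                ("SystemCache", ctypes.c_size_t),
                ("KernelTotal", ctypes.c_size_t),
                ("KernelPaged", ctypes.c_size_t),
                ("KernelNonpaged", ctypes.c_size_t),
                ("PageSize", ctypes.c_size_t),
                ("HandleCount", wt.DWORD),
                ("ProcessCount", wt.DWORD),
                ("ThreadCount", wt.DWORD)]

def commit_charge_gb():
    info = PERFORMANCE_INFORMATION()
    info.cb = ctypes.sizeof(info)
    ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(info), info.cb)
    return info.CommitTotal * info.PageSize / 1024**3   # counts are in pages

peak = 0.0
for _ in range(90):                 # e.g. sample once a minute for 1.5 hours
    peak = max(peak, commit_charge_gb())
    time.sleep(60)
print(f"peak commit charge: {peak:.2f} GB")
```

If the peak you see while running your normal workload stays comfortably below your installed RAM, that is the case being made above for dropping the pagefile.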
Plus, anyone who follows the archaic Microsoft dogma of sizing the pagefile as 1.5 x RAM (with large amounts of installed RAM) is a fool who's simply wasting disk space. Just turn off the system error dump, since you'll never use it if you do ever bluescreen.
EDIT: I admit, I ran out of memory the other day. But that was a fault with Media Player Classic buffering a large, damaged video when I tried to skip through it. I watched the Virtual Memory Commit Charge spiral out of control in Process Explorer, so it was a bug in Media Player or one of its components, not a fault in my choice to turn off the pagefile; it would likely have gone on to consume whatever pagefile I'd provided and hit that limit too.
There will always be exceptions, but this doesn't change the obvious best practice that Ao1's example demonstrates.
That is the only reason I can think of that MS would still recommend using the pagefile. What I could see over 1 1/2 hours is exactly what MS describe as the reason you should keep the pagefile with an SSD: a predominance of small random reads and large sequential writes, which an SSD can handle very well. The bottom line, however, is that RAM can handle it better, and it saves data being pointlessly written to the SSD.
I’ve left the pagefile.sys script running so later today I will post the Excel summary sheet for anyone that is interested.
On a separate issue here is a summary of Anvil’s monitoring.
Last edited by Ao1; 11-11-2010 at 02:57 AM.
Attached is the pagefile log after 7 hours of use.
Here I monitor Call of Duty Black Ops multiplayer.
Last edited by Ao1; 11-12-2010 at 06:14 AM.
Thanks for the chart(s) Ao1,
I've started testing on a single C300 256GB, looks like we might have a new champion
(not by much though)
I might try using the onboard Marvell controller (6Gb/s); it should be an interesting comparison with the ICH.
Last edited by Anvil; 11-13-2010 at 06:26 AM.
Anvil, here is a quick summary of the key performance metrics, including the C300. It’s easy to see where the G3 DRAM will come in handy.
Last edited by Ao1; 11-14-2010 at 12:55 AM. Reason: Updated Anvil's new data
Thanks again Ao1, great charts!
I've almost given up on getting the Marvell 6Gb/s onboard controller working, so I popped in the 9211 to get some 6Gb/s "results" using the C300.
Max response times aren't that good, but all in all it's the best combo so far.
vm_boot_build_shutdown_9211_C300_256_2.PNG
I'll try another driver for the Marvell controller, if that doesn't work out that is the end of testing the C300 as a single drive.
edit:
Finally found a driver for the 9128 that works, looks OK for a single drive.
vm_boot_build_shutdown_Marvell_9128_C300_256_2.PNG
Last edited by Anvil; 11-13-2010 at 01:12 PM.
Nice one Anvil. It’s a real shame that there is not a good SATA 3.0 controller out yet for the C300.
I’ve updated the chart above to include the latest runs. The Marvell really hurt the fast IOP count and added a lot to the response times.
SteveRo, any chance of persuading you to do a PCMV run and posting the log file? It would be great to see the log file entry that PCMV generated.
I had to try 3R0; the Intels are the first drives to get a go.
There's no doubt about it, nR0 really does improve performance.
vm_boot_build_shutdown_ICH_3R0_G2_160GB_2.PNG
hiomon_summary_8.PNG
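The reason nR0 scales is simple enough: consecutive stripes land on different member drives, so queued and independent requests can be serviced in parallel. Here is a minimal Python sketch of the LBA-to-member mapping; the 128KB stripe size and 3 members are just example values, not necessarily the settings used in the run above.

```python
STRIPE_KB = 128                  # example stripe size
MEMBERS = 3                      # "3R0" = three drives in RAID 0
SECTOR_BYTES = 512

def member_for_lba(lba):
    # Which member drive holds this logical sector under simple striping.
    stripe_index = (lba * SECTOR_BYTES) // (STRIPE_KB * 1024)
    return stripe_index % MEMBERS

# A 1MB sequential read at 128KB stripes touches 8 stripes spread over 3 drives:
offsets = range(0, 1024 * 1024, STRIPE_KB * 1024)
print([member_for_lba(off // SECTOR_BYTES) for off in offsets])
# -> [0, 1, 2, 0, 1, 2, 0, 1]
```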
Last edited by Anvil; 11-13-2010 at 04:02 PM.
Ao1
I see you have updated the charts.
I tried running 3R0 software RAID on the 9211 and everything was OK with the exception of one thing:
hIOmon wouldn't accept the volume as a valid device.
Not a big issue but it would have been an interesting run.
There are a lot of variables behind the PCMV score.
RAID 0 would have helped a lot.
Could you try running the HDD suite?
I might have a go at PCMV on my 2R0 C300 64GB later today.
Here is the standard PCMark Suite test.
It does include booting and so it's not a "pure" pcmv run.
BOOT_AND_PCMV_2R0_C300_ICH.PNG
QD is somewhat high on reads, and the max response time is sky high for some reason.
Anvil, on the trial version of PCMV it is not possible to select the storage bench.
I have attached a quick guide to help get up and running with folder monitoring quickly. I set the monitoring period to 1 minute for PCMV, but it can be set to whatever you want. Depending on what you are monitoring, however, it can generate a huge number of data entries due to the number of processes spawned when the app being monitored is executed.
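If you just want a quick interval-based view without the guide, here is a rough Python alternative using the third-party psutil package. Note this is not hIOmon and it cannot scope to a folder; it simply prints, once per minute, which processes issued the most reads in the previous interval.

```python
import time
import psutil   # third-party: pip install psutil

INTERVAL_S = 60          # same 1-minute period mentioned above
previous = {}

def snapshot():
    # Map (pid, name) -> cumulative read count for every process we can query.
    counts = {}
    for proc in psutil.process_iter(["name"]):
        try:
            counts[(proc.pid, proc.info["name"])] = proc.io_counters().read_count
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass
    return counts

while True:                                   # stop with Ctrl+C
    current = snapshot()
    deltas = {key: reads - previous.get(key, reads) for key, reads in current.items()}
    for (pid, name), delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        if delta:
            print(f"{name} (pid {pid}): {delta} reads in the last interval")
    print("-" * 40)
    previous = current
    time.sleep(INTERVAL_S)
```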
Last edited by Ao1; 11-18-2010 at 12:36 PM.