11.6TB.....Wear out 99%.
24TB.....Wear out 88.
Ok, this workload is much too light. All we are doing is sequentially writing to the SSD and slowly wearing out the NAND. The 2TB per percent that I am seeing right now would mean 200TB to reach 1, or 5,000 cycles on each NAND cell, which happens to be exactly the spec.
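To sanity-check that arithmetic, here is a quick sketch. The 40GB capacity is my assumption for this drive, and write amplification is ignored (it should be close to 1 for purely sequential writes):

```python
TB = 10**12
GB = 10**9

capacity = 40 * GB              # assumption: a 40GB drive
writes_per_percent = 2 * TB     # observed: ~2TB of host writes per 1% of wear

total_host_writes = writes_per_percent * 100   # 100 percentage points of wear
pe_cycles = total_host_writes // capacity      # write/erase cycles per NAND cell

print(total_host_writes // TB)  # 200 (TB of host writes to reach 1)
print(pe_cycles)                # 5000 cycles, right in line with the spec
```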
Anvil, how about adding some random writes, meaning, making changes within the generated files?
My 320 has the TRIM hang you speak of as well, but it only lasts 1.5-2 seconds. After this, the speed drops to about 39MB/s over a period of 30 seconds. During the next 30 seconds it slowly recovers to 42-43MB/s.
Finally
I can do that, although a lot of these writes are actually "random".
What I can do is enable part of the benchmark and have a fixed 1-2TB file where random writes take place at some interval.
Those writes would never be handled by TRIM, as TRIM can only do its cleansing when a file is "deleted".
edit:
The TRIM hang is more like One_Hertz's, except that it builds up speed for a short while and then, within a few minutes, slowly drops to about 32-33MB/s (it peaks at ~39MB/s).
Last edited by Anvil; 05-23-2011 at 06:44 AM.
-
Hardware:
Anvil can confirm (or deny), but I suspect the program is sending TRIM commands for a large number of files at once. Thousands?
For streaming video or audio, I don't think most people are likely to have thousands of files to delete at once. Or if they do, they are doing something wrong.
Yepp, I can confirm that we are talking about thousands of files, but I'm pretty sure it's more related to the size (in GB) of the files than to the number of files; it could be both, of course.
Someone should try streaming a video to a Vertex drive and then we would soon know. I can stream audio, so I'll try that later. I doubt it matters much what has been written, in terms of how many files were generated. I'd guess it's about how large the TRIM operation is, i.e. the total size of the data to be TRIMmed.
Anvil, I had a look at the OCZ firmware release notes and I see it is indeed documented. I was not aware of that beforehand.
Known open issues:
•TRIM command response times are longer than desired
EDIT: Anvil, I'm guessing all the temp files are deleted at the same time, i.e. at the end of a loop?
Last edited by Ao1; 05-23-2011 at 07:15 AM.
Closing in on 20TB
19_82_tb_host_writes.PNG
6.7 million files
I'll update the 1st post with an updated graph a little later.
I can stream audio and set the file to split at 64MB, 650MB, 700MB or 2,048MB (by using Tracktor).
I can also stream in any audio format. Wave eats up disk space (EDIT: but it can be compressed to a 10th of the size by converting it to an mp3).
Going to start working on it now, so we will soon see.
Both would be directly related to the # of LBAs.
In my case the 1.5 second delay was caused by deleting 1 single file (a 1.5GB file).
Now, how would that translate to a single 15GB file? Would it be 15 seconds?
The amount of data deleted by Ao1 is approximately 15GB.
I don't know, hIOmon lists the LBA range for TRIMs but I don't think it would for thousands of files.
@alfaunits
Here is a sample testfile.
Ao1 is correct regarding the "ResponseTime_Max_Control" shown within the hIOmon WMI Browser reflecting the maximum response time observed by hIOmon for a "TRIM" command (technically, a "Device Control" I/O operation that specified a "Manage Data Set Attributes (MDSA)" request for a "TRIM" action).
When the hIOmon "SSD I/O Performance Analysis Add-On" script configuration option is used, the hIOmon software is specifically and automatically configured to monitor control I/O operations for the specified physical volume/device(s); read and write I/O operations can also optionally be monitored by hIOmon.
Moreover, this monitoring of control I/O operations by hIOmon is limited to "MDSA" requests (consequently, other control I/O operations such as Flush Buffers are explicitly ignored by hIOmon).
So overall in this case, the various "control I/O operation" metrics captured by hIOmon reflect TRIM-related control I/O operations only. Similarly, the "control I/O operation" metrics shown within the hIOmon Presentation Client displays provided by Ao1 also reflect only TRIM-related control I/O operations.
^ this makes things very easy
Here are some results from 3 delete operations. Nothing else is running in the background.
What I notice: TRIM only occurs when the file is deleted from the Recycle Bin. In all cases there is a delay of a few seconds before the TRIM command is executed.
This tells me that when running the loop, as the new loop starts it is stopped by the TRIM command being executed a couple of seconds after the delete has occurred.
- File folder = 612MB (179 files in 4 folders)
- AVI = 697MB
- File folder = 6.83GB (635 files in 96 folders)
hIOmon can collect I/O operation metrics that are automatically summarized as well as an I/O operation trace. Both of these options can be used separately (with no dependence upon each other) or concurrently.
Some of the "summarized" TRIM-related metrics captured by hIOmon are shown within Ao1's prior post #84 using the hIOmon WMI Browser.
Similarly, Ao1's post #125 shows a snippet from a hIOmon Presentation Client display that includes several control I/O operation metrics which are TRIM-related.
The hIOmon Presentation Client can be configured to display additional TRIM-related metrics as shown in Anvil's post #148 within the hIOmon thread. A brief description of these metrics is provided in the subsequent post #149.
Of course, these are all displays of the "summary" metrics, which are collected typically upon some periodic basis and obviously represent an overall summary of the I/O operation activity observed by hIOmon during that time period (and overall).
hIOmon can also be configured so that it captures an I/O operation trace of the "TRIM" control I/O operations. In this case, the captured I/O operation information will include the "Data Set Ranges (DSR)" specified by each individual TRIM control I/O operation.
A DSR identifies the starting offset (i.e., essentially the starting block/sector address for the particular set of blocks/sectors) along with the overall combined length of these blocks. A single TRIM control I/O operation can specify one or more DSRs.
So this technique can be used to explicitly identify the particular LBAs that have been requested by the TRIM control I/O operation(s) to be "trimmed".
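As a rough sketch of how a DSR maps onto LBAs: the function and the (offset, length) pairs below are illustrative, assuming 512-byte logical sectors; a real trace would come from hIOmon itself.

```python
SECTOR_SIZE = 512  # assumption: 512-byte logical sectors

def dsr_to_lba_range(starting_offset, length_in_bytes):
    """Convert one Data Set Range (starting byte offset + combined length
    of the blocks) into (first LBA, number of LBAs)."""
    return starting_offset // SECTOR_SIZE, length_in_bytes // SECTOR_SIZE

# A single TRIM control I/O operation can specify one or more DSRs;
# hypothetical (offset, length) pairs for illustration:
dsrs = [(0, 1 << 20), (10 << 20, 2 << 20)]

total_sectors = sum(dsr_to_lba_range(off, ln)[1] for off, ln in dsrs)
print(total_sectors)  # 6144 sectors, i.e. 3MiB trimmed by one command
```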
To compare, here is a delete of the same 6GB file on an X25-M 160GB drive.
The TRIM command execution is also delayed by a couple of seconds after the file is deleted.
Last edited by Ao1; 05-23-2011 at 09:50 AM.
So, is your data saying that TRIMming a 6GB file takes 0.02sec on an X25-M, and 5.9sec on a Vertex 2?
johnw
That sounds about right.
Intel does nothing except for "releasing" the LBAs when "trimming" the data; apparently the SF controller does a lot more.
I'll try to do the same on one of the V3's I've got; TRIM seems to behave differently on the SF-2XXX controllers.
--
edit:
@overthere
So what would deleting, say, 4,000 files look like if the LBAs weren't contiguous; let's say there were 500 ranges?
Last edited by Anvil; 05-23-2011 at 09:28 AM.
John, I tried streaming with audio. Even if I use a Wave format (uncompressed) I can only get 1MB/s. Tracktor has a buffer, so unless a large file was deleted whilst recording it would be unlikely to have an effect.
That said, it is now clear that the SSD does not schedule TRIM during a low-activity period. Whatever is running when it executes will become unresponsive for however long the TRIM command takes to execute. If you work with large files that might become a problem.
Delta between most-worn and least-worn Flash blocks: 4
Wear out 99%
12TB
I wonder if the Vertex2 is taking the wear out value from the 'best' of the NAND? That is, SF wear leveling may be ineffective with this usage scenario and SandForce (or OCZ's firmware tweaks) may be taking the value from the NAND with the least wear.
Intel seems to be doing it with average wear (as was said, 1% per 2TB is right in line with 5000 write/erase cycles), or they have really effective wear leveling (maybe both?).
Not with the X25-M, but maybe the 320 is different.
EDIT: With the SF drive I think there is a clue as to what happens when TRIM is executed, in that it's the same for compressed or uncompressed data. I'm going to guess it's mostly due to the processor on the SSD, rather than the actual delete operation.
Last edited by Ao1; 05-23-2011 at 10:27 AM.