I never killed the card; it just lost all of its data once. :p
I've been running it @ 107MHz for nearly a year now with no issues.
Yes, 105 made things a bit faster and allowed me to get 7.9 across the board. I am really not very comfortable playing with this figure though, having killed an expensive high-end GPU with it before.
Diskmark Advanced Tests:
Fileserver - 657
Database - 292
Webserver - 696
Workstation - 239
I changed the file size of all the tests to 10240MB to make them more reliable, because in stock form some of these tests are only 2GB in size, which is a bit small.
I just hammered the thing with 170GB of writes, 110GB of which were 4KB random writes through iometer. There is ZERO performance drop. It does NOT degrade. I do not understand what must be done to force it to degrade if 170GB of writes does nothing. The access times are the same, WinSAT is the same, random reads/writes are the same, etc.... I wonder if they recently added TRIM... I am on W7 x64.
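For anyone wanting to reproduce the hammering without iometer, the workload is basically this - a rough sketch only (hypothetical path, and note os.write still goes through the Windows cache, so iometer with unbuffered I/O remains the proper tool):
Code:
import os, random, time

PATH = "F:\\torture.bin"        # hypothetical test file on the ioDrive
FILE_SIZE = 20 * 1024**3        # 20GB test region
TOTAL_WRITES = 110 * 1024**3    # 110GB of 4KB random writes
BLOCK = 4096

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_BINARY)
os.ftruncate(fd, FILE_SIZE)

written, t0 = 0, time.time()
while written < TOTAL_WRITES:
    # random 4KB-aligned offset inside the test region
    os.lseek(fd, random.randrange(FILE_SIZE // BLOCK) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
    written += BLOCK
    if written % 1024**3 == 0:  # report throughput after every GB written
        print("%3dGB done, %.1f MB/s" % (written // 1024**3,
              written / 1048576.0 / (time.time() - t0)))
os.close(fd)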
It would be pretty easy for them to add their own TRIM because their driver is filesystem aware... All it has to do is keep the unused sectors fresh, which is what it seems to be doing.
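Something like this, conceptually - pure speculation on my part about how a filesystem-aware driver could do it, not Fusion-io's actual code:
Code:
# A driver that can parse the filesystem knows which LBAs are unused,
# so it can pre-erase the flash behind them during idle time and every
# write lands on fresh NAND - a do-it-yourself TRIM, in effect.
class FilesystemAwareTrim:
    def __init__(self):
        self.free_lbas = set()            # LBAs the filesystem has freed

    def on_fs_free(self, lba):
        # called when the driver sees the filesystem release a sector
        self.free_lbas.add(lba)

    def idle_worker(self, erase_block):
        # background pass: keep the unused sectors fresh
        for lba in list(self.free_lbas):
            erase_block(lba)
            self.free_lbas.discard(lba)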
edit: Oh :banana::banana::banana::banana: I was in improved write performance mode and I didn't notice. Writing another 100+GB while using full capacity...
Excellent results... that 4K is stupendous! A few things here...
That latency is tremendous. Can you do an Everest measurement? That gives some good latency numbers... (down to 0.001)
Also, when you do the iometer 4K @ QD1-8, can you give us the access times, kinda in this fashion? All I need is the numbers though, no graph :) I have data such as this for the 9211, 9260-8i, the 1880, and ICH, so we should be able to get some good apples-to-apples here. I mean, that thing is killer! Congrats on your acquisition; on a heavily overclocked machine that thing would just crush :) Are there any plans for them to make it bootable?
EDIT: CPU usage would also be very interesting here... wondering what it is like with this device...
http://i517.photobucket.com/albums/u...random1880.png
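If iometer is a pain, even a quick threaded loop will get ballpark access times per QD - a sketch only (hypothetical path from the torture run above; reads can hit the Windows cache, so treat iometer as the real source of numbers):
Code:
import os, random, threading, time

PATH = "F:\\torture.bin"        # hypothetical test file
BLOCK, IOS_PER_WORKER = 4096, 5000
size = os.path.getsize(PATH)

def worker(latencies):
    fd = os.open(PATH, os.O_RDONLY | os.O_BINARY)
    for _ in range(IOS_PER_WORKER):
        off = random.randrange(size // BLOCK) * BLOCK
        t = time.perf_counter()
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, BLOCK)       # one 4KB random read, timed individually
        latencies.append(time.perf_counter() - t)
    os.close(fd)

# one thread per outstanding I/O approximates the queue depth
for qd in (1, 2, 4, 8):
    lat = []
    threads = [threading.Thread(target=worker, args=(lat,)) for _ in range(qd)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("QD%-2d avg access time: %.3f ms" % (qd, 1000 * sum(lat) / len(lat)))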
LOL you're a crazy man! But hey, it's SLC, so why not? :)
Quote:
Writing another 100+GB while using full capacity...
After 120GB of random 4KB writes, I once again can not get it to degrade at all. At this point, I will definitely never reach a degraded state with my usage.
Here is Everest...
edit: here is iometer like you asked, computurd.
Excellent. I'm sure there is a driver that you install for the device? Or is there any type of pre-boot interface? Or is it just plug-and-play, kinda like a single SSD device? Wondering how much of the processing is offloaded to the computer, as the CPU usage seems a bit high. Not worrying though, but it does have some softraid-esque CPU usage numbers, which is indicative of process offloading... but for that latency that is a negligible trade-off for sure :)
Wondering if it is basically a passthrough for a soft-RAID device, with built-in NAND?
Of course the CPU usage is going to scale with higher random access regardless... but those numbers still seem quite high...
For instance, @ 4K output of 268MB/s, the CPU usage of:
9211......4.3%
9260-8i...6.77% (with FP)
1880.....10%
Fusion-io 22%
So a bit of difference there... with the 9211 it doesn't scale as high with the 4K random, but the 9260 w/FP does, so at *roughly* 470MB/s @ 4K random:
9260......11.09%
Fusion-io 36%
Interesting. I will change drivers to SCSIport on the 1880 for a comparison with the 4K at that high of an output, as it doesn't scale that high with the Storport drivers...
Even with the lower CPU usage though, the RAID cards aren't even close on latency of course...
Here is a comparison with that in mind... 4K @ 268MB/s throughput, latency (ms):
9211 -- 0.064
9260 -- 0.354
1880 -- 0.3514
Fusion-io -- 0.0580
ROFL it kills them
@ 470MB/s:
9260 -- 2.0894
Fusion-io -- 0.2653
Yes, a driver must be installed or nothing sees it.
You are right, it seems like all the calculations are done on the main CPU. It has a CPU of its own as well (the sucker gets hot too; 70C while doing 4KB random writes). This is no big deal at lower QD, but at high QD load this is not a very good 'feature' at all. By the way, the CPU is no slouch either - a 4.5GHz i7. Slap this on some standard 2.8GHz AMD CPU, run some high QD work, and you just might reach a CPU bottleneck with the device alone!
You cannot exactly compare things by bandwidth across queue depths. It is kind of apples to oranges. Higher QD will always bring higher latencies and more CPU usage.
The device is extremely strong at low QD, especially 1-4. Looking at all the hIOmon traces, this is where most of the single-user action happens anyway, which is why I am so happy with this toy. For high QD there are most definitely better options.
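To put rough numbers on the cross-queue point: Little's Law ties queue depth, latency, and throughput together, so quoting bandwidth without the QD it was reached at hides the latency cost. A quick sanity check using the latency figures above (the QD8 line assumes latency stays flat, which it never does - that is exactly the point):
Code:
# Little's Law for storage: throughput (IOPS) = outstanding I/Os / latency
def iops(qd, latency_ms):
    return qd / (latency_ms / 1000.0)

print(iops(1, 0.058))                   # ~17,241 IOPS at QD1, ~67MB/s at 4KB
print(iops(8, 0.058) * 4096 / 1048576)  # ~539MB/s at QD8 IF latency held flat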
That is my understanding as well, i.e., Fusion-io provides their own OS device driver which "manages" their PCI card.
I believe that their OS device driver also requires "sufficient" system RAM to operate (at least according to the "Windows User Guide" for their ioXtreme version).
This makes me wonder about the extent, if any, to which their OS device driver uses this "required" system RAM as a cache, which has obvious performance (as well as other) implications in addition to the system CPU utilization considerations.
Regarding RAM usage, I have not been able to test this. The manual says that in the worst-case scenario, when writing in 4KB blocks, the ioDrive can use up to 425MB of RAM. Using IOMeter, my RAM usage is static no matter what I do. Perhaps it is permanently reserved in some way. If someone thinks of a good way to test this then I will gladly do it.
That's interesting. In looking at the "ioXtreme User Guide for Windows" document (version 3 for driver release 1.2.7) - which might not be applicable to the ioDrive (I have no idea) - there is a table that "shows the amount of RAM required per 80GB of storage space, using various block sizes".
In short, the table shows 800MB RAM usage for an "average block size" of 4096 bytes - and 5600MB RAM usage for an "average block size" of 512 bytes. :)
In general, Windows device drivers acquire either "paged" or "nonpaged" kernel memory. Presumably then the Fusion-io device driver acquires "nonpaged" kernel memory (i.e., basically the physical RAM). I do not know the manner in which it acquires (and perhaps subsequently releases) the RAM that it requires; it might acquire the required RAM (at least some of it) dynamically depending upon its current needs.
You can take a quick look at the Windows Task Manager to see the amounts of "paged" and "nonpaged" kernel memory - although, of course, both of these values reflect the current overall sum of the respective usage by all of the various kernel mode components.
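If you want to watch those two pool figures programmatically rather than eyeballing Task Manager, something along these lines should work - a sketch around the documented GetPerformanceInfo call in psapi.dll (the struct layout follows the MSDN documentation; run it before and after loading the Fusion-io driver and diff the numbers):
Code:
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD),
                ("CommitTotal", ctypes.c_size_t),
                ("CommitLimit", ctypes.c_size_t),
                ("CommitPeak", ctypes.c_size_t),
                ("PhysicalTotal", ctypes.c_size_t),
                ("PhysicalAvailable", ctypes.c_size_t),
                ("SystemCache", ctypes.c_size_t),
                ("KernelTotal", ctypes.c_size_t),
                ("KernelPaged", ctypes.c_size_t),
                ("KernelNonpaged", ctypes.c_size_t),
                ("PageSize", ctypes.c_size_t),
                ("HandleCount", wintypes.DWORD),
                ("ProcessCount", wintypes.DWORD),
                ("ThreadCount", wintypes.DWORD)]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb)

# counts are in pages, so convert via the reported page size
mb = pi.PageSize / 1048576.0
print("Paged pool:    %.0f MB" % (pi.KernelPaged * mb))
print("Nonpaged pool: %.0f MB" % (pi.KernelNonpaged * mb))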
Unless he is on an x64 OS, no driver can allocate that much RAM - paged or nonpaged. It must use physical memory allocations instead.
I believe it is not used as a cache, but rather as a bitmap holder. It reminds me of EasyCo's MFT, which does a similar thing. A cache that small would not be able to provide sustained 4K speeds, and this thing has 'em.
One_Hertz / Alfaunits,
Are you saying you can't copy a single file (like an avi) above 500MB/s if the copy is made on or from a drive with Windows installed on it? That sucks beyond belief if true. Is that a Windows thing? Could you copy a file above 500MB/s if you did it between two drives with no OS on them? How can a benchmark show speeds above 500MB/s if it can't be achieved in real life? (Benchmarks including WEI) :confused:
NapalmV5,
Hey, how's it going? I've been waiting for some video updates since you got your X25-E's. Will check them out later. That Areca has been a good investment. ;)
Right now I’m ready for Christmas. I have finally got 7.1 sound working on all my apps, including studio stuff (7/64) and BT were installing fibre optics down our road last week so a super-fast internet is on its way. Add to that a 500GB G3 in Feb and things are looking dandy.
It is a similar idea to the ioXtreme. The ioDrive just requires less RAM (which makes sense since it is a quicker SLC device).
Task Manager and the Resource Monitor are unfortunately not useful. They show static amounts of paged and nonpaged memory no matter the workload of the ioDrive.
Yes, I think so too.
HAHAHAHA you do not know me very well.
I do not know if it is Windows-related. All I know is that there is some sort of limit when copying files between my X25-E RAID and the ioDrive. There is no performance reason for it not to go at 600MB/s.
Yes, there are indeed addressing limitations. I failed to mention that the ioXtreme requires a 64-bit version of either Windows 7, Vista, or XP (according to the "ioXtreme User Guide for Windows" document), which obviously makes sense in order to address the stated RAM requirements. I believe that the ioDrive also requires a 64-bit version of the Windows OS.
My point about the apparent system RAM requirements is not that the system RAM is necessarily being used (to whatever extent) as a cache, but that there is perhaps another notable host system requirement (i.e., system RAM) along with at least some suggestion of higher CPU utilization.
In any case, as you mention, such a "small" cache (i.e., used to contain sector data) would ostensibly provide limited benefit in relation to the amount (i.e., overall range) of the device sectors accessed and the amount of random I/O operation activity performed by the benchmark/workloads used.
You also raise an interesting point about the system RAM being used as a "bitmap holder". If I were forced to speculate along these lines, it could be that the system RAM (or some portion of it) is being used for some FTL (Flash Translation Layer) purposes.
One example could be "cached" directories (or some portion of them) used to map the host LBAs to their respective locations within the flash media. This approach of using the system RAM (along with the host CPU capabilities) could have a notable effect upon overall performance. (Of course, there are also other factors involved in regards to the Fusion-io PCI card approach and its remarkable performance capabilities, e.g., using a system bus interface rather than a traditional device interface protocol such as ATA).
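A quick back-of-envelope on the User Guide figures quoted earlier supports this: the RAM requirement scales with the number of mapped blocks, which is exactly what you would expect from a per-LBA translation table held in host RAM. This is my speculation only; the sketch below just checks that the published numbers are consistent with that theory:
Code:
# Per the ioXtreme User Guide: 800MB RAM per 80GB at 4096-byte blocks,
# 5600MB per 80GB at 512-byte blocks. How many bytes per mapped block?
CAPACITY = 80 * 1024**3
for block_size, ram_mb in ((4096, 800), (512, 5600)):
    entries = CAPACITY // block_size
    print("%4dB blocks: %9d entries -> ~%.0f bytes/entry"
          % (block_size, entries, ram_mb * 1048576.0 / entries))
# both land around 35-40 bytes per entry, which at least fits the
# translation-table-in-host-RAM idea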
Perhaps then the Windows OS device driver for the ioDrive acquires its required system RAM when the device driver is loaded/started - and it subsequently does not dynamically adjust the amount of system RAM that it uses (and regardless of the currently active workloads).
Nizzen, that does not apply. Use Total Commander with Big file copy enabled.
Explorer uses the Windows cache, so the copy does not reflect the real copy speed.
Ah, right, the ioDrive/ioXtreme requires an x64 OS.
1.1GB used by Windows 7 does not seem high to me - I am on 1.4GB at startup and I only have MSN and Skype started.
It looks like the Windows copy handler function is not very efficient. There are lots of apps that claim to significantly improve file transfer speeds and it seems they do this with better buffer management, so maybe Windows is in fact limiting file transfer speeds.
TeraCopy
TeraCopy is a compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features
http://www.codesector.com/teracopy.php
http://lifehacker.com/5280976/five-b...e-file-copiers
ExtremeCopy
http://www.easersoft.com/
As for RAM allocation:
Resource Monitor > Memory tab > the Physical Memory section at the bottom of the page will show the amount of memory that is reserved for hardware.
System Information > Hardware Resources > Memory will show devices using resources.
Or
Memory Pool Monitor
http://support.microsoft.com/kb/177415
I hope OH was using Total Commander as I mentioned. When Big file copy is selected (and a well-sized buffer, such as 4096K), there is very little room to improve the copy speeds further. The internal Windows CopyFile/Ex APIs use the Windows cache, and until Vista used 64KB buffers - which is ridiculous for large file transfers.
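The buffer-size effect is easy to demonstrate with a plain chunked copy loop - a sketch only (hypothetical paths; note a user-mode copy like this still goes through the Windows cache, so a strict test would need FILE_FLAG_NO_BUFFERING via CreateFile, which this does not do):
Code:
import time

def copy_file(src, dst, buf_size):
    # same loop either way; only the read/write chunk size changes
    t0 = time.time()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(buf_size)
            if not chunk:
                break
            fout.write(chunk)
    return time.time() - t0

# compare the old 64KB CopyFile-style buffer against a 4096K one
for size in (64 * 1024, 4096 * 1024):
    secs = copy_file("F:\\big.avi", "G:\\big.avi", size)
    print("%7d-byte buffer: %.1fs" % (size, secs))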
Driver allocated memory would not show as device memory, that should only show memory reserved before Windows starts (such as integrated graphics' used memory).
PoolMon might work, if the driver uses pool memory - sort it by current size used by allocations, and if the top one has a few hundred MB that is it - nothing in Windows itself uses >100MB unless it's a file server, and even then it takes a ridiculous amount of file access.