Just finished flashing latest firmware. WOW.
I believe this beats 6x Acard on ICH10R at winsat in every category except max latency
Exceptional performance at QD1 :)
Random 16KB read is faster than 64KB sequential read :up:
A few IOmeter runs would be great.
4KB random read QD1 through 8 in particular.
Also, checking for 512B vs 4KB aligned read speed would be interesting. (@QD32)
Here is vantage. IOMeter next.
I am not going to screenshot every IOMeter config because that is just a waste of time.
4KB 100% read 100% random 1 worker:
1 queue - 24557 IOPs
2 queue - 46826 IOPs
3 queue - 61824 IOPs
4 queue - 68764 IOPs
5 queue - 70786 IOPs
6 queue - 70906 IOPs
12 queue - 70840 IOPs
18 queue - 84567 IOPs
24 queue - 114701 IOPs
32 queue - 120555 IOPs
512 bytes 100% read 100% random 1 queue (real access time check) - 27916 IOPs or 0.0356ms
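As a side note, those QD-vs-IOPS figures and the access time are consistent with each other via Little's law (average latency ≈ queue depth / IOPS). A quick sketch, with the function name being mine:

```python
# Little's law sanity check: at a steady queue depth,
# concurrency = throughput * latency, so latency = QD / IOPS.
# Function name is illustrative; figures are from the post above.
def avg_latency_ms(queue_depth, iops):
    return queue_depth / iops * 1000.0

# 512B random read at QD1: 27916 IOPs implies ~0.0358 ms per IO,
# right in line with the ~0.0356 ms access time reported
print(round(avg_latency_ms(1, 27916), 4))
```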
edit: I hate myself for not buying this for 3k when it first came out in 2008. This is A LOT faster than my 4x X25-Es.
Sexy! :D
How's it feel in normal use?
Woah, now that's what I call fast...
No, not bootable. Only paid 1500 for it off ebay. The thing was brand new never used. I think I paid 1500 total for my X25-Es as well. 1000 for the first two, and 500 for the other two (got the second pair used).
From all the tests I've run, the iodrive is like the next step up from a normal SSD...
Naturally, the game loading times only dropped by a bit, since those are mostly CPU limited. Around 1 second off 7-second load times. L4D went from about 7.5s on the X25s to 6.5s on the iodrive (with a system reboot between each test). Other games show similar improvements.
I am quite happy with this iodrive. I do not see anything using a SATA cable beating this for a very long time.
Have you got those X25-Es in R0? Just for fun, could you try how fast a file copy from one to the other takes? (both directions)
TC->Big file copy would be most precise.
I was among the first (well, the company was...) to get this back in 2008. They f***ing took waaay too long to get it released; it wasn't really available in 2008! We got our money back and decided to go with 4x X25-Es. Maybe stuff that runs on the IoDrive is faster, but we saved considerable time not waiting for it ;) (it's the same as getting a car/house via credit or saving up and buying it - in the end, credit costs more, but you get to use the car/house all those years as well)
wow nice, i know a few clients of mine would love one of these for mysql database servers :D
Wow :shocked: That is the fastest run time I’ve seen. (WEI) :cool: The performance/ $ cost ratio makes this a really good buy. Incredible for a single device. Are you seeing any degradation?
Copying between my two arrays is one of the first things I did haha. It goes between 450 and 520MB/s. Seems to be the same in both directions. Windows limitation?
No degradation yet. I've written 200GB to the device (there is a handy tracking tool for GB read/written in the software) and the AS SSD numbers + iometer reads look identical thus far. I have not tested writes properly however. Looks like it has very good garbage collection. No, there is no TRIM on the device, but a secure erase can be done with one button through the software and it is very quick. The plan is to hammer it with 100+GB of 4KB random writes and then see how it performs after.
By the way, I tested this specifically, and the ram does not temporarily boost any figures. All the performance values are 100% sustainable. No cache no nothing.
All the reviews for the device on the net are old. There have been several firmware and software updates since then which increased performance. I was NOT expecting such crazy low QD performance and 0.35ms access times...
I wouldn't trust the tool that tracks data usage in the ioAdministrator. Pretty sure it is not even close to being correct. It says I have written something like 20TB to my ioXtreme and I almost never write to it.
When my board gets back from RMA, I'll post a few ioXtreme benches to compare against. I've been trying to buy another one somewhere but they are impossible to find. :(
Anyways welcome to the Fusion-IO crew. http://smiliesftw.com/x/wave.gif
I think it is some sort of PCI-e limitation, but I can't explain it; the numbers aren't close to the known limits, just a bit over them, which doesn't quite make sense.
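For context, here is a rough sketch of the ceiling on a first-generation PCIe link. The x4 lane count for this card and the ~80% protocol-efficiency figure are assumptions on my part:

```python
# Back-of-the-envelope PCIe 1.x bandwidth. The 80% protocol-efficiency
# figure is a rough rule of thumb (TLP headers, flow control), not a
# measured value.
def pcie1_bandwidth_mb_s(lanes, efficiency=0.8):
    gt_per_s = 2.5e9                   # PCIe 1.x: 2.5 GT/s per lane
    bits_per_s = gt_per_s * 8 / 10     # 8b/10b encoding -> 2 Gb/s/lane
    raw_mb_s = lanes * bits_per_s / 8 / 1e6
    return raw_mb_s * efficiency

print(pcie1_bandwidth_mb_s(4))  # ~800 MB/s usable on an x4 link
```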
I can't get any good copy speeds on my laptop, even though I've got 5 SSDs connected via eSATA and on one internal slot. (100MB/s - forget it :D)
congratz on your slc iodrive acquisition 1hz :toast:
here's my Areca 1231 / 2GB / 4x X25-E
http://img338.imageshack.us/img338/699/64467359.png
oh and check my tubes for loading times: Metro 2033 / Lost Planet 2
at the end of the Metro 2033 vid you can see the whole system
Not in the queue depth 1 random reads it doesn't. That is where Fusion-IO devices shine.
My ioXtreme does 3x as much as your array.
> Disk Sequential 64.0 Read 935.00 MB/s 7.9
> Disk Random 16.0 Read 922.81 MB/s 7.9
> Responsiveness: Average IO Rate 0.42 ms/IO 7.9
> Responsiveness: Grouped IOs 7.67 units 7.6
> Responsiveness: Long IOs 1.79 units 7.9
> Responsiveness: Overall 13.76 units 7.9
> Responsiveness: PenaltyFactor 0.0
> Disk Sequential 64.0 Write 301.00 MB/s 7.7
> Average Read Time with Sequential Writes 0.194 ms 7.9
> Latency: 95th Percentile 1.701 ms 7.9
> Latency: Maximum 4.664 ms 7.9
> Average Read Time with Random Writes 0.229 ms 7.9
> Total Run Time 00:00:59.14
One_Hertz, have you tried increasing your PCIe frequency? I get huge benefits by increasing mine. I've gone up to 110MHz before devices started to drop out on me.
Increasing that stuff can be dangerous and I don't have a warranty on my toy. I wouldn't be surprised if that is why your card died... I remember killing a GPU with like 115MHz PCI-e two years back. I will try 105MHz and I also need to try the higher write performance formatting options.
Ignoring the people still benchmarking controller cache...
Napalm - nice vids. I've been wanting to play metro 2033 for a while now so I will probably buy it soon.
Hah, Credit Suisse are using it to speed up their trading. They will probably get their money back from it in milliseconds as well.
"Credit Suisse says in this case the ioMemory will be used to speed up algorithmic trading, which is using powerful, super-fast computers to analyze movements in the prices of stocks, bonds and other securities, and determine when to buy and sell, based on a pre-programmed set of rules and conditions. It's a business where, as The Wall Street Journal's Donna Kardos Yeslavich wrote in October, milliseconds count, so banks and other financial institutions are constantly on the lookout for anything that can speed the process up."
Yes 105 made things a bit faster and allowed me to get 7.9 across the board. I am really not very comfortable playing with this figure though, having killed an expensive high end gpu with it before.
Diskmark Advanced Tests:
Fileserver - 657
Database - 292
Webserver - 696
Workstation - 239
I changed the file size of all the tests to 10240MB to make it a more reliable test because in stock form some of these tests are only 2GB in size which is a bit small.
I just hammered the thing with 170GB of writes, 110GB of which were 4kb random writes through iometer. There is ZERO performance drop. It does NOT degrade. I do not understand what must be done to force it to degrade if 170gb of writes does nothing. The access times are the same, winsat is the same, random reads/writes are the same, etc.... I wonder if they recently added trim... I am on W7 x64.
It would be pretty easy for them to add their own TRIM because their driver is filesystem aware... All it has to do is keep the unused sectors fresh, which is what it seems to be doing.
edit: Oh :banana::banana::banana::banana: I was in improved write performance mode and I didn't notice. Writing another 100+GB while using full capacity...
excellent results... that 4k is stupendous! few things here...
that latency is tremendous, can you do an everest measurement? that gives some good latency numbers...(down to .001)
also, when you do the iometer 4k @ QD1-8, can you give us the access times, kinda in this fashion? all i need is the numbers though, no graph :) i have data such as this for the 9211, 9260-8i, the 1880, and ICH, so we should be able to get some good apples to apples here. i mean that thing is killer! congrats on your acquisition. on a heavily overclocked machine that thing would just crush :) are there any plans for them to make it bootable?
EDIT: CPU usage would also be very interesting here... wondering what it is like with this device..
http://i517.photobucket.com/albums/u...random1880.png
LOL you're a crazy man! but hey it's SLC, so why not? :)
Quote:
Writing another 100+GB while using full capacity...
After 120GB of random 4KB writes, I once again can not get it to degrade at all. At this point, I will definitely never reach a degraded state with my usage.
Here is everest...
edit: here is iometer like you asked computurd.
excellent. im sure there is a driver that you install for the device? or is there any type of pre-boot interface? or is it just plug and play like a single SSD device? wondering how much of the processing is offloaded to the computer, as the cpu usage seems a bit high. not worrying though, but it does have some softraid-esque cpu usage numbers, which is indicative of processing being offloaded... but for that latency that is a negligible trade off for sure :)
wondering if it is basically a passthrough for a soft raid device? with built in nand?
of course the CPU usage is going to scale regardless with the higher the random access is....but those numbers still seem quite high..
for instance @ 4k output of 268mb/s the cpu usage of:
9211.....4.3%
9260-8i..6.77 (with FP)
1880....10%
fusion i/o 22%
so a bit of difference there...with the 9211 it doesnt scale as high with the 4k random, but the 9260 w/FP does though, so at *roughly* 470mb/s @ 4k random:
9260...11.09%
fusion...36%
interesting. will change drivers to scsiport on the 1880 for a comparison with the 4k at that high of an output, as it doesnt scale that high with the storport drivers..
even with the lower cpu usage though, the raid cards arent even close with latency of course...
here is a comparison with that in mind...4k @ 268 throughput latency
9211 --0.064
9260 --0.354
1880 --0.3514
fusion --0.0580
ROFL it kills them
@ 470 MB/s
9260--2.0894
fusion--0.2653
Yes a driver must be installed or nothing sees it.
You are right, it seems like all the calculations are done on the main CPU. It has a CPU of its own as well (the sucker gets hot too; 70C while doing 4KB random writes). This is no big deal at lower QD, but at high-QD load this is not a very good 'feature' at all. By the way, my CPU is no slouch either - a 4.5GHz i7. Slap this on some standard 2.8GHz AMD CPU, run some high-QD work, and you just might hit a CPU bottleneck from the device alone!
You cannot exactly compare things by bandwidth cross-queue; it is kind of apples to oranges. Higher QD will always bring higher latencies and more CPU usage.
The device is extremely strong at low QD, especially 1-4. Looking at all the Hiomon traces this is where most of the single user action happens anyway, which is why I am so happy with this toy. For high QD there are most definitely better options.
That is my understanding as well, i.e., Fusion-io provides their own OS device driver which "manages" their PCI card.
I believe that their OS device driver also requires "sufficient" system RAM to operate (at least according to the "Windows User Guide" for their ioXtreme version).
This makes me wonder about the extent, if any, to which their OS device driver uses this "required" system RAM as a cache, which has obvious performance (as well as other) implications in addition to the system CPU utilization considerations.
Regarding RAM usage, I have not been able to test this. The manual says that at worst case scenario, when writing in 4KB blocks, the iodrive can use up to 425MB of ram. Using IOMeter my RAM usage is static no matter what I do. Perhaps it is permanently reserved in some way. If someone thinks of a good way to test this then I will gladly do it.
That's interesting. In looking at the "ioXtreme User Guide for Windows" document (version 3 for driver release 1.2.7) - which might not be applicable to the ioDrive (I have no idea) - there is a table that "shows the amount of RAM required per 80GB of storage space, using various block sizes".
In short, the table shows 800MB RAM usage for an "average block size" of 4096 bytes - and 5600MB RAM usage for an "average block size" of 512 bytes. :)
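If those table figures are taken at face value, the implied overhead works out to a roughly constant number of bytes of RAM per block mapped, which is what you'd expect from a per-block translation table. A quick sketch; the per-entry interpretation is my assumption, not anything stated in the guide:

```python
# Implied RAM overhead per mapped block, from the user-guide table
# quoted above (800MB per 80GB at 4KB blocks, 5600MB at 512B blocks).
# Treating this as "bytes of RAM per translation-table entry" is an
# assumption for illustration.
CAPACITY_BYTES = 80 * 10**9

def bytes_per_entry(ram_mb, block_size):
    entries = CAPACITY_BYTES // block_size  # one entry per block
    return ram_mb * 2**20 / entries

print(round(bytes_per_entry(800, 4096), 1))   # ~43 bytes/entry
print(round(bytes_per_entry(5600, 512), 1))   # ~38 bytes/entry
```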
In general, Windows device drivers acquire either "paged" or "nonpaged" kernel memory. Presumably then the Fusion-io device driver acquires "nonpaged" kernel memory (i.e., basically physical RAM). I do not know the manner in which it acquires (and perhaps subsequently releases) the RAM that it requires; it might acquire the required RAM (at least some of it) dynamically depending upon its current needs.
You can take a quick look at the Windows Task Manager to see the amounts of "paged" and "nonpaged" kernel memory - although, of course, both of these values reflect the current overall sum of the respective usage by all of the various kernel mode components.
Unless he is on x64 OS, no driver can allocate that much RAM - paged or nonpaged. It must use physical memory allocations instead.
I believe it is not used as a cache, but rather as a bitmap holder. It reminds me of EasyCo's MFT, which does a similar thing. A cache that small would not be able to provide sustained 4K speeds, and this thing has 'em.
One_Hertz/ Alfaunits,
Are you saying you can't copy a single file (like an avi) above 500MB/s if the copy is made on or from a drive with Windows installed on it? That sucks beyond belief if true. Is that a Windows thing? Could you copy a file above 500MB/s if you did it between two drives with no OS on them? How can a benchmark show speeds above 500MB/s if they can't be achieved in real life? (Benchmarks including WEI) :confused:
NapalmV5,
Hey how’s it going? I’ve been waiting for some video updates since you got your X25-E’s. Will check them out later. That areca has been a good investment. ;)
Right now I’m ready for Christmas. I have finally got 7.1 sound working on all my apps, including studio stuff (7/64) and BT were installing fibre optics down our road last week so a super-fast internet is on its way. Add to that a 500GB G3 in Feb and things are looking dandy.
It is a similar idea to the ioXtreme. The ioDrive just requires less RAM (which makes sense since it is a quicker SLC device).
Task manager and the resource monitor are unfortunately not useful. They show static amounts of paged and nonpaged memory no matter the workload of the ioDrive.
Yes, I think so too.
HAHAHAHA you do not know me very well.
I do not know if it is windows related. All I know is that there is some sort of limit when copying files between my X25E raid and the iodrive. There is no performance reason for it not to go at 600MB/s.
Yes, there are indeed addressing limitations. I failed to mention that the ioXtreme requires a 64-bit version of Windows 7, Vista, or XP (according to the "ioXtreme User Guide for Windows" document), which obviously makes sense in order to address the stated RAM requirements. I believe that the ioDrive also requires a 64-bit version of the Windows OS.
My point about the apparent system RAM requirements is not that the system RAM is necessarily being used (to whatever extent) as a cache, but that there is perhaps another notable host system requirement (i.e., system RAM) along with at least some suggestion of higher CPU utilization.
In any case as you mention, such a "small" cache (i.e., used to contain sector data) would ostensibly provide limited benefit in relation to the amount (i.e., overall range) of the device sectors accessed and the amount of random I/O operation activity performed by the benchmark/workloads used.
You also raise an interesting point about the system RAM being used as a "bitmap holder". If I were forced to speculate along these lines, it could be that the system RAM (or some portion of it) is being used for some FTL (Flash Translation Layer) purposes.
One example could be "cached" directories (or some portion of them) used to map the host LBAs to their respective locations within the flash media. This approach of using the system RAM (along with the host CPU capabilities) could have a notable effect upon overall performance. (Of course, there are also other factors involved in regards to the Fusion-io PCI card approach and its remarkable performance capabilities, e.g., using a system bus interface rather than a traditional device interface protocol such as ATA).
Perhaps then the Windows OS device driver for the ioDrive acquires its required system RAM when the device driver is loaded/started - and it subsequently does not dynamically adjust the amount of system RAM that it uses (and regardless of the currently active workloads).
Nizzen, that does not apply. Use Total Commander with Big file copy enabled.
Explorer uses the windows cache, so the copy does not reflect the real copy speed.
Ah, right, IoDrive/Extreme requires x64 OS.
1.1GB used by Windows 7 does not seem high to me - I am on 1.4GB at startup and I only have MSN and Skype started.
It looks like the Windows copy handler is not very efficient. There are lots of apps that claim to significantly improve file transfer speeds, and it seems they do this with better buffer management, so maybe Windows is in fact limiting file transfer speeds.
TeraCopy
TeraCopy is a compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features
http://www.codesector.com/teracopy.php
http://lifehacker.com/5280976/five-b...e-file-copiers
ExtremeCopy
http://www.easersoft.com/
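The "better buffer management" point is easy to illustrate: copy through a large user-space buffer instead of a small default one. A minimal sketch; the 4MB chunk size and all names are mine, not how TeraCopy or ExtremeCopy actually work:

```python
# Minimal sketch of a large-buffer file copy. Buffer size and function
# name are illustrative only.
import os
import shutil
import tempfile

def big_buffer_copy(src, dst, buf_size=4 * 2**20):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        # copyfileobj reads and writes in buf_size chunks
        shutil.copyfileobj(fin, fout, length=buf_size)

# demo on a small throwaway file
with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, "a.bin"), os.path.join(d, "b.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(1 << 20))
    big_buffer_copy(src, dst)
    print(os.path.getsize(dst) == os.path.getsize(src))  # True
```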
As for RAM allocation:
Resource Monitor>Memory tab> The Physical Memory section at the bottom of the page will show the amount of memory that is reserved for hardware.
System Information> Hardware Resources> Memory> will show devices using resources.
Or
Memory Pool Monitor
http://support.microsoft.com/kb/177415
I hope OH was using Total Commander as I mentioned. When Big file copy is selected (and a well-sized buffer, such as 4096K), there is very little room to improve the copy speeds further. The internal Windows CopyFile/Ex APIs use the Windows cache, and until Vista used 64KB buffers - which is ridiculous for large file transfers.
Driver allocated memory would not show as device memory, that should only show memory reserved before Windows starts (such as integrated graphics' used memory).
Pool Mon might work, if the driver uses pool memory - sort it by current size of allocations, and if the top one has a few hundred MB, that is it - nothing in Windows itself uses >100MB unless it's a file server, and even then it takes a ridiculous amount of file access.
Sorry, I can't test it now. I do not have all the SSDs anymore.
I tested several times, and the copy speed seemed to be right. A 7GB file was copied in about 6 seconds :)
All other benchmarks give almost the same results.
The hardware i used was:
http://www.diskusjon.no/index.php?ap...tach_id=401674
8xc300 128gb
10x intel 160gb
16x Hitachi 2tb
Original thread with a few tests:
http://www.diskusjon.no/index.php?showtopic=1255245
Well I just found a simple way to bug the SH*T out of vantage and make it show random numbers. Yeah this benchmark just lost any and all validity it had in my eyes.
You cannot change the disk to test with the PCMark suite, only the HDD suite. I also tested the HDD suite with a RAM disk (3-way DDR3-2000 CL6). Yes, it flies :yepp:
Are you using the 2.2.1 drivers? They were released a day or two after you got your card. They add TRIM to the ioDrive, which should help considerably with degradation.
Fusion-IO has just been banging out new drivers for their cards lately. I went like 8 months before I saw a single update, now there have been like 5-7 of them in just the past few months. Who knows, maybe they've got their :banana::banana::banana::banana: together. Maybe we will see a bootable card here in the near future after all. http://smiliesftw.com/x/fingersx.gif
I want to settle the discussion about ioDrive RAM usage here.
I've read all the released papers on ioMemory since 2007, and their drives physically act as an advanced parallel memory array with asynchronous latencies, presented to the OS through the driver as a block storage device.
The onboard controller handles things like ECC and bad block management, while LBA->Phy mapping is handled in the driver, which is what takes up the bulk of the RAM usage.
Since the host CPU is a lot faster than any storage processor (and with looser power and thermal limits), and keeping the lookup table in system memory significantly lowers the associated overhead latency, you get first bit latency comparable to raw flash latency.
In July last year, Fusion-IO launched VSL, "Virtual Storage Layer", which allows applications like databases to optimize their access patterns to fit ioMemory. This can take care of stuff like not double-writing all entries to a database, write-read, or write-flush-wait-{next command}.
The last thing I'll mention is that storagesearch.com has for a while now made a distinction between "legacy" and "new dynasty" SSDs. Legacy SSDs try to fit the slots of HDDs or HDD RAIDs, while New Dynasty is a totally fresh bottom-up design to fit the new slot in the memory hierarchy. Fusion-IO is the star example of a "new dynasty" SSD. The reason RAMdrives like Acard and iRAM have failed to beat it hands down across the board performance-wise (which they should at block level) is their Legacy design and the related overhead and limitations, the biggest of which are the SATA layer and the storage controller.
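The host-side mapping described above can be sketched in a few lines. This is only an illustration of the concept, with all names invented; it is not Fusion-io's actual driver logic:

```python
# Concept sketch: the driver keeps the LBA -> physical-flash-address
# map in host RAM, so a read costs one table lookup plus the raw flash
# access. Log-structured, out-of-place writes; names are illustrative.
class HostSideFTL:
    def __init__(self):
        self.lba_to_phys = {}   # in-RAM translation table
        self.next_free = 0      # next free flash page

    def write(self, lba, data, flash):
        # Out-of-place write: append to flash, then update the RAM map.
        phys = self.next_free
        self.next_free += 1
        flash[phys] = data
        self.lba_to_phys[lba] = phys

    def read(self, lba, flash):
        # One RAM lookup, then a direct flash access.
        return flash[self.lba_to_phys[lba]]

flash = {}
ftl = HostSideFTL()
ftl.write(7, b"hello", flash)
ftl.write(7, b"world", flash)   # remap; the old page becomes garbage
print(ftl.read(7, flash))       # b'world'
```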
im posting my part :)
got an iodrive 80gb SLC in my server.
the interesting part is their latest driver now includes TRIM
Write Bandwidth test
Code:
=5G --numjobs=4 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 1.50
Starting 4 processes
Jobs: 4 (f=4): [wwww] [100.0% done] [0K/610.1M /s] [0 /596 iops] [eta 00m:00s]
file1: (groupid=0, jobs=4): err= 0: pid=14733
write: io=5961.0MB, bw=610040KB/s, iops=595 , runt= 10006msec
clat (usec): min=2950 , max=10029 , avg=6708.82, stdev=88.58
lat (usec): min=2951 , max=10029 , avg=6709.14, stdev=88.58
bw (KB/s) : min=151552, max=154412, per=25.02%, avg=152640.23, stdev=221.11
cpu : usr=0.08%, sys=1.50%, ctx=7077, majf=0, minf=112
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/5961/0, short=0/0/0
lat (msec): 4=0.07%, 10=99.92%, 20=0.02%
Run status group 0 (all jobs):
WRITE: io=5961.0MB, aggrb=610040KB/s, minb=624681KB/s, maxb=624681KB/s, mint=10006msec, maxt=10006msec
Disk stats (read/write):
fioa: ios=0/47386, merge=0/982, ticks=0/279097, in_queue=279165, util=99.23%
Read IOPS test
[root@zer0 ~]#
[root@zer0 ~]# fio --filename=/dev/fioa --direct=1 --rw=randread --bs=4k --size=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.50
Starting 64 processes
Jobs: 64 (f=64): [rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr] [91.7% done] [495.5M/0K /s] [124K/0 iops] [eta 00m:01s]
file1: (groupid=0, jobs=64): err= 0: pid=14770
read : io=5136.1MB, bw=525964KB/s, iops=131491 , runt= 10001msec
clat (usec): min=43 , max=7161 , avg=482.41, stdev=35.87
lat (usec): min=43 , max=7161 , avg=482.59, stdev=35.87
bw (KB/s) : min= 7192, max=24768, per=1.57%, avg=8247.87, stdev=228.14
cpu : usr=0.67%, sys=1.32%, ctx=1545009, majf=0, minf=1855
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=1315042/0/0, short=0/0/0
lat (usec): 50=0.02%, 100=0.79%, 250=23.87%, 500=36.24%, 750=21.33%
lat (usec): 1000=11.12%
lat (msec): 2=6.58%, 4=0.05%, 10=0.01%
Run status group 0 (all jobs):
READ: io=5136.1MB, aggrb=525964KB/s, minb=538587KB/s, maxb=538587KB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
fioa: ios=1299839/0, merge=0/0, ticks=623085/0, in_queue=622916, util=99.07%
Read Bandwidth test
Code:
=5G --numjobs=4 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 1.50
Starting 4 processes
Jobs: 4 (f=4): [rrrr] [100.0% done] [693.3M/0K /s] [676 /0 iops] [eta 00m:00s]
file1: (groupid=0, jobs=4): err= 0: pid=14883
read : io=7549.0MB, bw=772863KB/s, iops=754 , runt= 10002msec
clat (usec): min=1430 , max=15027 , avg=5291.03, stdev=830.98
lat (usec): min=1430 , max=15027 , avg=5291.32, stdev=830.98
bw (KB/s) : min=158791, max=208896, per=25.15%, avg=194362.18, stdev=5590.36
cpu : usr=0.08%, sys=2.13%, ctx=7568, majf=0, minf=1140
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=7549/0/0, short=0/0/0
lat (msec): 2=0.52%, 4=23.22%, 10=74.41%, 20=1.85%
Run status group 0 (all jobs):
READ: io=7549.0MB, aggrb=772863KB/s, minb=791411KB/s, maxb=791411KB/s, mint=10002msec, maxt=10002msec
Disk stats (read/write):
fioa: ios=60046/0, merge=0/0, ticks=242617/0, in_queue=242651, util=99.26%
Write IOPS test
Code:
e=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.50
Starting 64 processes
Jobs: 64 (f=64): [wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww] [100.0% done] [0K/609.2M /s] [0 /152K iops] [eta 00m:00s]
file1: (groupid=0, jobs=64): err= 0: pid=18283
write: io=6400.4MB, bw=655328KB/s, iops=163832 , runt= 10001msec
clat (usec): min=27 , max=7390 , avg=403.79, stdev=11.18
lat (usec): min=27 , max=7390 , avg=403.95, stdev=11.18
bw (KB/s) : min= 9272, max=39320, per=1.47%, avg=9631.96, stdev=68.73
cpu : usr=0.85%, sys=1.66%, ctx=1690198, majf=0, minf=1727
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/1638485/0, short=0/0/0
lat (usec): 50=1.19%, 100=2.03%, 250=6.00%, 500=88.34%, 750=1.96%
lat (usec): 1000=0.05%
lat (msec): 2=0.43%, 4=0.02%, 10=0.01%
Run status group 0 (all jobs):
WRITE: io=6400.4MB, aggrb=655328KB/s, minb=671056KB/s, maxb=671056KB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
fioa: ios=0/1619264, merge=0/0, ticks=0/616410, in_queue=616012, util=99.15%
Are there any other devices representing the new dynasty IO stack? I know there are other PCIe flash devices, but at least the ones from OCZ simply package SATA RAID on a card (which of course is lame).
It looks like Fusion did something right, as their write latencies are out of this world, hence the very high low-QD IOPS. I imagine a soft RAID across several of these cards would best any SATA RAID currently available.
Mr 1Hz - Is this the same as what you have?
http://cgi.ebay.com/ws/eBayISAPI.dll...m=120660601501
lol, regrettably http://cgi.ebay.com/FusionIO-ioDrive...item45f928150d
the 160gb is still in auction... if someone gets that for below 1000 usd it's a killer deal
kinda regrettable since my iodrive is only 80gb and has like 68gb of free space left lol
I can not get the thing to run correctly and I am not even certain why this time around. Perhaps parts of my windows are corrupt. Sorry I do not have the time to fix it right now to provide real results. I highly doubt anything flash based would match up to the speed of DRAM cache though so I would not get this for PCMarks. I don't see why someone doesn't just configure the funkycache software properly and run the entire PCMark HDD suites in RAM and just gets #1. Not that different than what is being done right now anyhow.
good luck with ur bid, the retail price is 8000-10000 usd ;)
Only if you could boot the damn thing... Win OS would be so happy to reside on it :shrug:
yea, but I doubt many companies use it for boot only.
For me, I use my iodrive to store a mysql database. 80gb is enough for me as I've only used 2.5gb so far (although I have 68gb of usable space left) and server load improved a lot :)
lowfat and One Hertz,
How does the performance of the ioDrive (SLC) differ from the ioXtreme (MLC)? I think I saw an AS SSD of the ioDrive in this thread. lowfat, could you post your ioXtreme score?
The ioDrive is 2x+ faster in most workloads, except sequential reads. Access times as provided by Fusion-io are 26us vs 80us.
Here is my WEI run. PCIe bus was @ 105MHz.
> Disk Sequential 64.0 Read 935.00 MB/s 7.9
> Disk Random 16.0 Read 922.81 MB/s 7.9
> Responsiveness: Average IO Rate 0.42 ms/IO 7.9
> Responsiveness: Grouped IOs 7.67 units 7.6
> Responsiveness: Long IOs 1.79 units 7.9
> Responsiveness: Overall 13.76 units 7.9
> Responsiveness: PenaltyFactor 0.0
> Disk Sequential 64.0 Write 301.00 MB/s 7.7
> Average Read Time with Sequential Writes 0.194 ms 7.9
> Latency: 95th Percentile 1.701 ms 7.9
> Latency: Maximum 4.664 ms 7.9
> Average Read Time with Random Writes 0.229 ms 7.9
> Total Run Time 00:00:59.14
I picked up a 320GB MLC version of the iodrive and raided it with my 80GB SLC.
....:shocked::eek:
With 2x SLC the 4K reads would be higher, or am I mistaken?
outrageously good numbers...
Decided to pick up one of these to play with. :D
What about softraid with a pile of acards on p67? :) hmmm ...
http://img96.imageshack.us/img96/986/iodrive.gif
Uploaded with ImageShack.us
Nice! Should be one crazy array. I was looking at getting that one too, but I've run out of PCI-E slots.
Yes, very excited!
1hz - do you still have the 320 together with the 80?
What worked best for config set up for you?
Steve
You lucky ....... :)
Wish I could get hold of one of those.
PS!
I've sent you an email.
Anvil - I can't get my verizon email here but I will look for it when i get home.
Steve
Ok, no hurry :)
Hot damn that is a stellar price. I check for good Fusion-IO prices on eBay once every week or two. Guess I missed that one. I may end up selling my ioXtreme since I am having a hard time finding another to RAID it with.