Can you do the same thing and report the speeds with either TeraCopy or hIOmon?
Sorry, I can't test it now. I do not have all the SSDs anymore.
I tested several times, and the copy speed seemed to be right. A 7 GB file was copied in about 6 seconds :)
All other benchmarks give almost the same results.
The hardware I used was:
http://www.diskusjon.no/index.php?ap...tach_id=401674
8xc300 128gb
10x intel 160gb
16x Hitachi 2tb
Original thread with a few tests:
http://www.diskusjon.no/index.php?showtopic=1255245
Well, I just found a simple way to bug the SH*T out of Vantage and make it show random numbers. Yeah, this benchmark just lost any and all validity it had in my eyes.
You cannot change the disk to test with the PCMark suite, only the HDD suite. I also tested the HDD suite with a RAM disk (3-way DDR3-2000 CL6). Yes, it flies :yepp:
Are you using the 2.2.1 drivers? They were released a day or two after you got your card. They add TRIM to the ioDrive, which should help considerably w/ degradation.
Fusion-IO has just been banging out new drivers for their cards lately. I went like 8 months without seeing a single update; now there have been like 5-7 of them in just the past few months. Who knows, maybe they've got their act together. Maybe we will see a bootable card here in the near future after all.
I want to settle the discussion about ioDrive RAM usage here.
I've read all released papers on ioMemory since 2007, and their drives physically act as an advanced parallel memory array with asynchronous latencies, presented to the OS through the driver as a block storage device.
The onboard controller handles things like ECC and bad block management, while LBA->Phy mapping is handled in the driver, which is what takes up the bulk of the RAM usage.
Since the host CPU is a lot faster than any storage processor (and has looser power and thermal limits), and keeping the lookup table in system memory significantly lowers the associated overhead latency, you get first-bit latency comparable to raw flash latency.
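To put a rough number on that driver-side mapping table, here is a back-of-the-envelope sketch. The 4 KiB mapping granularity and 8 bytes per entry are my own illustrative assumptions, not figures from Fusion-IO's papers:

```shell
# Hypothetical sizing of a flat LBA -> physical lookup table for an 80 GB card,
# assuming 4 KiB mapping granularity and 8 bytes per table entry.
capacity_bytes=$(( 80 * 1024 * 1024 * 1024 ))
entries=$(( capacity_bytes / 4096 ))            # one entry per 4 KiB block
table_mib=$(( entries * 8 / 1024 / 1024 ))      # 8 bytes per entry
echo "${entries} entries -> ~${table_mib} MiB of host RAM"
```

Under those assumptions the table comes out around 160 MiB for an 80 GB card, and it scales linearly with capacity, which is the pattern people complain about with ioDrive RAM usage.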
In July last year, Fusion-IO launched VSL ("Virtual Storage Layer"), which allows applications like databases to optimize their access patterns to fit ioMemory. This can take care of things like not double-writing all entries to a database, write-read, or write-flush-wait-{next command}.
The last thing I'll mention is that storagesearch.com has for a while now made a distinction between "legacy" and "new dynasty" SSDs. Legacy SSDs try to fit the slots of HDDs or HDD RAIDs, while a New Dynasty SSD is a totally fresh bottom-up design to fit the new slot in the memory hierarchy. Fusion-IO is the star example of a "new dynasty" SSD. The reason RAMdrives like ACard and i-RAM have failed to beat it hands down across the board performance-wise (which they should at block level) is their legacy design and the related overhead and limitations, the biggest of which are the SATA layer and the storage controller.
I'm posting my part :)
I've got an 80 GB SLC ioDrive in my server.
The interesting part is that their latest driver now includes TRIM.
Write Bandwidth test
Code:
=5G --numjobs=4 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 1.50
Starting 4 processes
Jobs: 4 (f=4): [wwww] [100.0% done] [0K/610.1M /s] [0 /596 iops] [eta 00m:00s]
file1: (groupid=0, jobs=4): err= 0: pid=14733
write: io=5961.0MB, bw=610040KB/s, iops=595 , runt= 10006msec
clat (usec): min=2950 , max=10029 , avg=6708.82, stdev=88.58
lat (usec): min=2951 , max=10029 , avg=6709.14, stdev=88.58
bw (KB/s) : min=151552, max=154412, per=25.02%, avg=152640.23, stdev=221.11
cpu : usr=0.08%, sys=1.50%, ctx=7077, majf=0, minf=112
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/5961/0, short=0/0/0
lat (msec): 4=0.07%, 10=99.92%, 20=0.02%
Run status group 0 (all jobs):
WRITE: io=5961.0MB, aggrb=610040KB/s, minb=624681KB/s, maxb=624681KB/s, mint=10006msec, maxt=10006msec
Disk stats (read/write):
fioa: ios=0/47386, merge=0/982, ticks=0/279097, in_queue=279165, util=99.23%
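Quick sanity check on the summary above: at bs=1M each IO is 1 MiB, so the reported bandwidth and IOPS should agree (bw in KiB/s divided by 1024 ≈ IOPS):

```shell
# fio reported bw=610040 KB/s and iops=595 at bs=1M; dividing the bandwidth
# by 1024 KiB per IO should land on roughly the same IOPS figure.
awk 'BEGIN { printf "%.1f IOPS implied by bandwidth\n", 610040 / 1024 }'
```

That prints 595.7, matching the reported 595 IOPS, so the run is internally consistent.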
Read IOPS test
[root@zer0 ~]#
[root@zer0 ~]# fio --filename=/dev/fioa --direct=1 --rw=randread --bs=4k --size=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.50
Starting 64 processes
Jobs: 64 (f=64): [rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr] [91.7% done] [495.5M/0K /s] [124K/0 iops] [eta 00m:01s]
file1: (groupid=0, jobs=64): err= 0: pid=14770
read : io=5136.1MB, bw=525964KB/s, iops=131491 , runt= 10001msec
clat (usec): min=43 , max=7161 , avg=482.41, stdev=35.87
lat (usec): min=43 , max=7161 , avg=482.59, stdev=35.87
bw (KB/s) : min= 7192, max=24768, per=1.57%, avg=8247.87, stdev=228.14
cpu : usr=0.67%, sys=1.32%, ctx=1545009, majf=0, minf=1855
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=1315042/0/0, short=0/0/0
lat (usec): 50=0.02%, 100=0.79%, 250=23.87%, 500=36.24%, 750=21.33%
lat (usec): 1000=11.12%
lat (msec): 2=6.58%, 4=0.05%, 10=0.01%
Run status group 0 (all jobs):
READ: io=5136.1MB, aggrb=525964KB/s, minb=538587KB/s, maxb=538587KB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
fioa: ios=1299839/0, merge=0/0, ticks=623085/0, in_queue=622916, util=99.07%
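The IOPS figure and the average latency above are two views of the same number: with 64 synchronous jobs at iodepth=1 each, expected IOPS ≈ jobs / average completion latency. Checking against the reported avg clat of 482.41 µs:

```shell
# 64 sync jobs, avg clat 482.41 us -> expected throughput ~= 64 / 482.41e-6,
# i.e. roughly 132,667 IOPS, close to the 131,491 fio reported (the small gap
# is per-IO submission overhead that clat does not include).
awk 'BEGIN { printf "%d IOPS expected\n", 64 / 482.41e-6 }'
```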
Read Bandwidth test
Code:
=5G --numjobs=4 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 1.50
Starting 4 processes
Jobs: 4 (f=4): [rrrr] [100.0% done] [693.3M/0K /s] [676 /0 iops] [eta 00m:00s]
file1: (groupid=0, jobs=4): err= 0: pid=14883
read : io=7549.0MB, bw=772863KB/s, iops=754 , runt= 10002msec
clat (usec): min=1430 , max=15027 , avg=5291.03, stdev=830.98
lat (usec): min=1430 , max=15027 , avg=5291.32, stdev=830.98
bw (KB/s) : min=158791, max=208896, per=25.15%, avg=194362.18, stdev=5590.36
cpu : usr=0.08%, sys=2.13%, ctx=7568, majf=0, minf=1140
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=7549/0/0, short=0/0/0
lat (msec): 2=0.52%, 4=23.22%, 10=74.41%, 20=1.85%
Run status group 0 (all jobs):
READ: io=7549.0MB, aggrb=772863KB/s, minb=791411KB/s, maxb=791411KB/s, mint=10002msec, maxt=10002msec
Disk stats (read/write):
fioa: ios=60046/0, merge=0/0, ticks=242617/0, in_queue=242651, util=99.26%
Write IOPS test
Code:
e=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.50
Starting 64 processes
Jobs: 64 (f=64): [wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww] [100.0% done] [0K/609.2M /s] [0 /152K iops] [eta 00m:00s]
file1: (groupid=0, jobs=64): err= 0: pid=18283
write: io=6400.4MB, bw=655328KB/s, iops=163832 , runt= 10001msec
clat (usec): min=27 , max=7390 , avg=403.79, stdev=11.18
lat (usec): min=27 , max=7390 , avg=403.95, stdev=11.18
bw (KB/s) : min= 9272, max=39320, per=1.47%, avg=9631.96, stdev=68.73
cpu : usr=0.85%, sys=1.66%, ctx=1690198, majf=0, minf=1727
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=0/1638485/0, short=0/0/0
lat (usec): 50=1.19%, 100=2.03%, 250=6.00%, 500=88.34%, 750=1.96%
lat (usec): 1000=0.05%
lat (msec): 2=0.43%, 4=0.02%, 10=0.01%
Run status group 0 (all jobs):
WRITE: io=6400.4MB, aggrb=655328KB/s, minb=671056KB/s, maxb=671056KB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
fioa: ios=0/1619264, merge=0/0, ticks=0/616410, in_queue=616012, util=99.15%
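Same sanity check for the 4K write run: at bs=4k each IO is 4 KiB, so bandwidth in KiB/s should be exactly 4 × IOPS:

```shell
# 163832 IOPS at 4 KiB per IO -> 163832 * 4 = 655328 KiB/s, exactly the
# bw=655328KB/s figure in the summary line above.
awk 'BEGIN { printf "%d KB/s\n", 163832 * 4 }'
```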
Are there any other devices representing the new-dynasty IO stack? I know there are other PCIe flash devices, but at least the ones from OCZ simply package SATA RAID on a card (which of course is lame).
It looks like Fusion did something right, as their write latencies are out of this world, hence the very high low-QD IOPS. I imagine a soft RAID across several of these cards would best any SATA RAID currently available.
Mr 1Hz - Is this the same as what you have?
http://cgi.ebay.com/ws/eBayISAPI.dll...m=120660601501
lol, regrettably http://cgi.ebay.com/FusionIO-ioDrive...item45f928150d
The 160 GB is still in auction... if someone gets that for below 1000 USD, it's a killer deal.
Kinda regrettable, since my ioDrive is only 80 GB and has like 68 GB of free space left, lol
I cannot get the thing to run correctly, and I am not even certain why this time around. Perhaps parts of my Windows install are corrupt. Sorry, I do not have the time to fix it right now to provide real results. I highly doubt anything flash-based would match the speed of a DRAM cache, though, so I would not get this for PCMark. I don't see why someone doesn't just configure the funkycache software properly, run the entire PCMark HDD suite in RAM, and just take #1. Not that different from what is being done right now anyhow.
Good luck with your bid; the retail price is 8000-10000 USD ;)
If only you could boot the damn thing... a Windows OS would be so happy to reside on it :shrug:
Yeah, but I doubt many companies would use it for boot only.
For me, I use my ioDrive to store a MySQL database. 80 GB is enough for me, as I've only used 2.5 GB so far (although I have 68 GB of usable space left), and server load improved a lot :)
lowfat and One Hertz,
How does the performance of the ioDrive (SLC) differ from the ioXtreme (MLC)? I think I saw an AS SSD result for the ioDrive in this thread. lowfat, could you post your ioXtreme score?
The ioDrive is 2x+ faster in most workloads, except sequential reads. Access times as provided by Fusion-IO are 26 µs vs 80 µs.
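Those access times put a hard ceiling on single-threaded QD1 IOPS, since at queue depth 1 IOPS ≈ 1 / latency. Using the two quoted figures:

```shell
# QD1 IOPS ceiling = 1 / access latency:
#   ioDrive  (SLC, 26 us) -> ~38,462 IOPS
#   ioXtreme (MLC, 80 us) -> 12,500 IOPS
awk 'BEGIN {
    printf "ioDrive  (26 us): %.0f IOPS max\n", 1 / 26e-6
    printf "ioXtreme (80 us): %.0f IOPS max\n", 1 / 80e-6
}'
```

That 3x gap in the QD1 ceiling is why the SLC card pulls ahead in most workloads even when both have bandwidth to spare.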
Here is my WEI run. The PCIe bus was @ 105 MHz.
> Disk Sequential 64.0 Read 935.00 MB/s 7.9
> Disk Random 16.0 Read 922.81 MB/s 7.9
> Responsiveness: Average IO Rate 0.42 ms/IO 7.9
> Responsiveness: Grouped IOs 7.67 units 7.6
> Responsiveness: Long IOs 1.79 units 7.9
> Responsiveness: Overall 13.76 units 7.9
> Responsiveness: PenaltyFactor 0.0
> Disk Sequential 64.0 Write 301.00 MB/s 7.7
> Average Read Time with Sequential Writes 0.194 ms 7.9
> Latency: 95th Percentile 1.701 ms 7.9
> Latency: Maximum 4.664 ms 7.9
> Average Read Time with Random Writes 0.229 ms 7.9
> Total Run Time 00:00:59.14