Page 3 of 4
Results 51 to 75 of 89

Thread: FusionIO SLC IOdrive Benchmarks

  1. #51
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Nizzen View Post
    Not windows related
    Can you do the same thing and report the speeds with either TeraCopy or hIOmon?

  2. #52
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by Ao1 View Post
    Can you do the same thing and report the speeds with either TeraCopy or hIOmon?
    Sorry, I can't test it now. I don't have all the SSDs anymore.

    I tested it several times, and the copy speed seemed to be right. A 7 GB file was copied in about 6 seconds.

    All other benchmarks give almost the same results.

    The hardware I used was:

    http://www.diskusjon.no/index.php?ap...tach_id=401674

    8x C300 128GB
    10x Intel 160GB
    16x Hitachi 2TB


    Original thread with a few tests:

    http://www.diskusjon.no/index.php?showtopic=1255245

  3. #53
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Well, I just found a simple way to bug the SH*T out of Vantage and make it show random numbers. Yeah, this benchmark just lost any and all validity it had in my eyes.
    Attached Thumbnails: bug.jpg (121.2 KB, 665 views)

  4. #54
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    You cannot change the disk under test with the full PCMark suite, only with the HDD suite. I also tested the HDD suite with a RAM disk (3-way DDR3-2000 CL6). Yes, it flies.

  5. #55
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Nizzen View Post
    You cannot change the disk under test with the full PCMark suite, only with the HDD suite. I also tested the HDD suite with a RAM disk (3-way DDR3-2000 CL6). Yes, it flies.
    I clearly did not do that with a RAM disk. RAM disks cannot go 468 GB/s. I caused a bug to occur in the testing. This can be triggered for the boot volume too, but it would be a huge PITA to do.

  6. #56
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Where's the copy test without cache? I cannot find anything related to copy speeds in the Norwegian thread.

    Don't compare other benchmarks; a real copy without cache is not comparable to anything in your first post.
    Quote Originally Posted by Nizzen View Post
    Sorry, I can't test it now. I don't have all the SSDs anymore.

    I tested it several times, and the copy speed seemed to be right. A 7 GB file was copied in about 6 seconds.

    All other benchmarks give almost the same results.

    The hardware I used was:

    http://www.diskusjon.no/index.php?ap...tach_id=401674

    8xc300 128gb
    10x intel 160gb
    16x Hitachi 2tb


    Original thread with a few tests:

    http://www.diskusjon.no/index.php?showtopic=1255245
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..

  7. #57
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Are you using the 2.2.1 drivers? They were released a day or two after you got your card. They add TRIM to the ioDrive, which should help considerably with degradation.

  8. #58
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by lowfat View Post
    Are you using the 2.2.1 drivers? They were released a day or two after you got your card. They add TRIM to the ioDrive, which should help considerably with degradation.
    Yep. TRIM was actually added in 2.2.0. Everything I've posted was done on 2.2.0, so TRIM was enabled the entire time, which explains why there was zero degradation in my testing.

  9. #59
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Fusion-io has just been banging out new drivers for their cards lately. I went like 8 months without seeing a single update, but now there have been like 5-7 of them in just the past few months. Who knows, maybe they've got their act together. Maybe we will see a bootable card in the near future after all.

  10. #60
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I want to settle the discussion about ioDrive RAM usage here.

    I've read all released papers on ioMemory since 2007. Their drives physically act as an advanced parallel memory array with asynchronous latencies, presented to the OS through the driver as a block storage device.

    The onboard controller handles things like ECC and bad-block management, while the LBA->phy mapping is handled in the driver, which is what takes up the bulk of the RAM usage.
    Since the host CPU is a lot faster than any storage processor (and has looser power and thermal limits), and keeping the lookup table in system memory significantly lowers the associated overhead latency, you get first-bit latency comparable to raw flash latency.

    In July last year, Fusion-io launched VSL, the "Virtual Storage Layer", which allows applications like databases to optimize their access patterns to fit ioMemory. This can take care of things like not double-writing all entries to a database, write-read, or write-flush-wait-{next command}.

    The last thing I'll mention is that storagesearch.com has for a while now made a distinction between "legacy" and "new dynasty" SSDs. Legacy SSDs try to fit the slots of HDDs or HDD RAIDs, while new dynasty SSDs are totally fresh bottom-up designs built to fit the new slot in the memory hierarchy. Fusion-io is the star example of a "new dynasty" SSD. The reason RAM drives like ACard and i-RAM have failed to beat it hands down across the board performance-wise (which they should at block level) is their legacy design and the related overhead and limitations, the biggest of which are the SATA layer and the storage controller.
    Last edited by GullLars; 02-06-2011 at 06:01 AM.
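
    GullLars's point about the driver keeping the LBA->physical lookup in host RAM can be sketched as a toy flash translation layer. This is an illustration only, not Fusion-io's actual driver; the class and field names are invented:

    ```python
    # Toy sketch of a host-side flash translation layer (FTL): the
    # LBA -> physical mapping lives in host RAM, so locating data needs
    # no round-trip to an on-device controller, and first-bit latency
    # stays close to raw flash latency. NOT Fusion-io's real code.

    class HostSideFTL:
        def __init__(self):
            self.lba_to_phys = {}   # the RAM-hungry lookup table
            self.next_free = 0      # naive append-only allocator

        def write(self, lba, data, flash):
            # Flash can't overwrite in place: write to a fresh page,
            # then update the in-RAM map (the old page becomes garbage
            # for later collection).
            phys = self.next_free
            self.next_free += 1
            flash[phys] = data
            self.lba_to_phys[lba] = phys

        def read(self, lba, flash):
            # One dict lookup at host-CPU speed, then a single flash access.
            return flash[self.lba_to_phys[lba]]

    flash = {}                      # stand-in for the NAND array
    ftl = HostSideFTL()
    ftl.write(42, b"hello", flash)
    ftl.write(42, b"world", flash)  # rewrite: new page, map updated
    print(ftl.read(42, flash))      # -> b'world'
    ```

    This also shows why the driver's RAM use scales with mapped capacity: the table needs one entry per mapped sector, and it has to live somewhere fast.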

  11. #61
    Registered User
    Join Date
    Dec 2008
    Posts
    29
    I'm posting my part.
    I've got an ioDrive 80GB SLC on my server.
    The interesting part is that their latest driver now includes TRIM.

    Write Bandwidth test
    Code:
    =5G --numjobs=4 --runtime=10 --group_reporting --name=file1
    file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
    ...
    file1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
    fio 1.50
    Starting 4 processes
    Jobs: 4 (f=4): [wwww] [100.0% done] [0K/610.1M /s] [0 /596  iops] [eta 00m:00s]
    file1: (groupid=0, jobs=4): err= 0: pid=14733
      write: io=5961.0MB, bw=610040KB/s, iops=595 , runt= 10006msec
        clat (usec): min=2950 , max=10029 , avg=6708.82, stdev=88.58
         lat (usec): min=2951 , max=10029 , avg=6709.14, stdev=88.58
        bw (KB/s) : min=151552, max=154412, per=25.02%, avg=152640.23, stdev=221.11
      cpu          : usr=0.08%, sys=1.50%, ctx=7077, majf=0, minf=112
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued r/w/d: total=0/5961/0, short=0/0/0
    
         lat (msec): 4=0.07%, 10=99.92%, 20=0.02%
    
    Run status group 0 (all jobs):
      WRITE: io=5961.0MB, aggrb=610040KB/s, minb=624681KB/s, maxb=624681KB/s, mint=10006msec, maxt=10006msec
    
    Disk stats (read/write):
      fioa: ios=0/47386, merge=0/982, ticks=0/279097, in_queue=279165, util=99.23%

    Read IOPS test

    Code:
    [root@zer0 ~]# fio --filename=/dev/fioa --direct=1 --rw=randread --bs=4k --size=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
    file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
    ...
    file1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
    fio 1.50
    Starting 64 processes
    Jobs: 64 (f=64): [rrrr...rrrr] [91.7% done] [495.5M/0K /s] [124K/0  iops] [eta 00m:01s]
    file1: (groupid=0, jobs=64): err= 0: pid=14770
      read : io=5136.1MB, bw=525964KB/s, iops=131491 , runt= 10001msec
        clat (usec): min=43 , max=7161 , avg=482.41, stdev=35.87
         lat (usec): min=43 , max=7161 , avg=482.59, stdev=35.87
        bw (KB/s) : min= 7192, max=24768, per=1.57%, avg=8247.87, stdev=228.14
      cpu          : usr=0.67%, sys=1.32%, ctx=1545009, majf=0, minf=1855
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued r/w/d: total=1315042/0/0, short=0/0/0
         lat (usec): 50=0.02%, 100=0.79%, 250=23.87%, 500=36.24%, 750=21.33%
         lat (usec): 1000=11.12%
         lat (msec): 2=6.58%, 4=0.05%, 10=0.01%
    
    Run status group 0 (all jobs):
       READ: io=5136.1MB, aggrb=525964KB/s, minb=538587KB/s, maxb=538587KB/s, mint=10001msec, maxt=10001msec
    
    Disk stats (read/write):
      fioa: ios=1299839/0, merge=0/0, ticks=623085/0, in_queue=622916, util=99.07%
    Read Bandwidth test

    Code:
    =5G --numjobs=4 --runtime=10 --group_reporting --name=file1
    file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
    ...
    file1: (g=0): rw=randread, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
    fio 1.50
    Starting 4 processes
    Jobs: 4 (f=4): [rrrr] [100.0% done] [693.3M/0K /s] [676 /0  iops] [eta 00m:00s]
    file1: (groupid=0, jobs=4): err= 0: pid=14883
      read : io=7549.0MB, bw=772863KB/s, iops=754 , runt= 10002msec
        clat (usec): min=1430 , max=15027 , avg=5291.03, stdev=830.98
         lat (usec): min=1430 , max=15027 , avg=5291.32, stdev=830.98
        bw (KB/s) : min=158791, max=208896, per=25.15%, avg=194362.18, stdev=5590.36
      cpu          : usr=0.08%, sys=2.13%, ctx=7568, majf=0, minf=1140
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued r/w/d: total=7549/0/0, short=0/0/0
    
         lat (msec): 2=0.52%, 4=23.22%, 10=74.41%, 20=1.85%
    
    Run status group 0 (all jobs):
       READ: io=7549.0MB, aggrb=772863KB/s, minb=791411KB/s, maxb=791411KB/s, mint=10002msec, maxt=10002msec
    
    Disk stats (read/write):
      fioa: ios=60046/0, merge=0/0, ticks=242617/0, in_queue=242651, util=99.26%
    Write IOPS test

    Code:
    e=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
    file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
    ...
    file1: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
    fio 1.50
    Starting 64 processes
    Jobs: 64 (f=64): [wwww...wwww] [100.0% done] [0K/609.2M /s] [0 /152K iops] [eta 00m:00s]
    file1: (groupid=0, jobs=64): err= 0: pid=18283
      write: io=6400.4MB, bw=655328KB/s, iops=163832 , runt= 10001msec
        clat (usec): min=27 , max=7390 , avg=403.79, stdev=11.18
         lat (usec): min=27 , max=7390 , avg=403.95, stdev=11.18
        bw (KB/s) : min= 9272, max=39320, per=1.47%, avg=9631.96, stdev=68.73
      cpu          : usr=0.85%, sys=1.66%, ctx=1690198, majf=0, minf=1727
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued r/w/d: total=0/1638485/0, short=0/0/0
         lat (usec): 50=1.19%, 100=2.03%, 250=6.00%, 500=88.34%, 750=1.96%
         lat (usec): 1000=0.05%
         lat (msec): 2=0.43%, 4=0.02%, 10=0.01%
    
    Run status group 0 (all jobs):
      WRITE: io=6400.4MB, aggrb=655328KB/s, minb=671056KB/s, maxb=671056KB/s, mint=10001msec, maxt=10001msec
    
    Disk stats (read/write):
      fioa: ios=0/1619264, merge=0/0, ticks=0/616410, in_queue=616012, util=99.15%
    Last edited by keiko; 03-07-2011 at 01:16 PM.
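
    A quick sanity check on keiko's fio summaries: with a sync engine at iodepth=1, the reported bandwidth should be roughly iops x block size, and the pasted runs bear that out (the 4k runs match exactly; the 1M runs are off by a fraction of a percent because fio rounds the iops figure). Numbers below are copied verbatim from the pastes:

    ```python
    # Cross-check the fio summary lines above: bandwidth (KB/s) should be
    # approximately iops * block size (KB). All figures copied from the
    # pasted runs.
    runs = [
        # (label,                iops,  bs_kb, reported KB/s)
        ("write bandwidth 1M",    595,   1024, 610040),
        ("read IOPS 4k",       131491,      4, 525964),
        ("read bandwidth 1M",     754,   1024, 772863),
        ("write IOPS 4k",      163832,      4, 655328),
    ]

    for label, iops, bs_kb, reported in runs:
        approx = iops * bs_kb
        err = abs(approx - reported) / reported
        print(f"{label}: {approx} KB/s vs reported {reported} KB/s ({err:.2%} off)")
    ```

    So the headline numbers are internally consistent: ~655 MB/s of 4k random writes at ~164k IOPS and ~526 MB/s of 4k random reads at ~131k IOPS, from a single card.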

  12. #62
    Registered User
    Join Date
    Sep 2009
    Posts
    51
    Are there any other devices representing the "new dynasty" IO stack? I know there are other PCIe flash devices, but at least the ones from OCZ simply package SATA RAID on a card (which of course is lame).
    It looks like Fusion did something right, as their write latencies are out of this world, hence the very high low-QD IOPS. I imagine a soft RAID across several of these cards would best any SATA RAID currently available.

  13. #63
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Mr 1Hz - Is this the same as what you have?

    http://cgi.ebay.com/ws/eBayISAPI.dll...m=120660601501

  14. #64
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by SteveRo View Post
    Mr 1Hz - Is this the same as what you have?

    http://cgi.ebay.com/ws/eBayISAPI.dll...m=120660601501
    Looks just like mine, but I paid $1500 for a brand new one.

    BTW, 112k in the Vantage HDD suite with the PCIe clock at only 105 MHz. 120k+ is easily possible with a higher PCIe clock, I think, but I like my ioDrive too much to push it.

  15. #65
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Quote Originally Posted by gordy View Post
    I imagine a soft raid across several of these cards would best any sata raid currently available.
    I would do this in a heartbeat if I could actually find another ioXtreme for sale anywhere.

  16. #66
    Registered User
    Join Date
    Dec 2008
    Posts
    29
    lol, regrettably http://cgi.ebay.com/FusionIO-ioDrive...item45f928150d

    The 160GB is still up for auction... if someone gets it for below 1000 USD, it's a killer deal.
    Kinda regrettable, since my ioDrive is only 80GB and has like 68GB of free space left, lol.

  17. #67
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by One_Hertz View Post
    Looks just like mine, but I paid $1500 for a brand new one.

    BTW, 112k in the Vantage HDD suite with the PCIe clock at only 105 MHz. 120k+ is easily possible with a higher PCIe clock, I think, but I like my ioDrive too much to push it.
    How does it perform as a pcm05 HDD target?

  18. #68
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by keiko View Post
    lol, regrettably http://cgi.ebay.com/FusionIO-ioDrive...item45f928150d

    The 160GB is still up for auction... if someone gets it for below 1000 USD, it's a killer deal.
    Kinda regrettable, since my ioDrive is only 80GB and has like 68GB of free space left, lol.
    I might bid on this if it would do well in pcm05 - Mr 1Hz?

  19. #69
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by SteveRo View Post
    I might bid on this if it would do well in pcm05 - Mr 1Hz?
    I cannot get the thing to run correctly, and this time around I am not even certain why. Perhaps parts of my Windows install are corrupt. Sorry, I don't have the time to fix it right now to provide real results. I highly doubt anything flash-based would match the speed of a DRAM cache though, so I would not get this for PCMarks. I don't see why someone doesn't just configure the funkycache software properly, run the entire PCMark HDD suite in RAM, and just take #1. Not that different from what is being done right now anyhow.
    Attached Thumbnails: pcmark05.jpg (103.2 KB, 339 views)
    Last edited by One_Hertz; 03-08-2011 at 04:18 PM.

  20. #70
    Registered User
    Join Date
    Dec 2008
    Posts
    29
    Good luck with your bid; the retail price is 8,000-10,000 USD.

  21. #71
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    If only you could boot off the damn thing... a Windows OS would be so happy to reside on it.

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  22. #72
    Registered User
    Join Date
    Dec 2008
    Posts
    29
    Yeah, but I doubt many companies would use it for boot only.
    I use my ioDrive to store a MySQL database. 80GB is enough for me, as I've only used 2.5GB so far (I still have 68GB of usable space left), and server load improved a lot.

  23. #73
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    lowfat and One Hertz,

    How does the performance of the ioDrive (SLC) differ from the ioXtreme (MLC)? I think I saw an AS SSD run of the ioDrive in this thread. lowfat, could you post your ioXtreme score?


  24. #74
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    The ioDrive is 2x+ faster in most workloads, except sequential reads. Access times as quoted by Fusion-io are 26us vs 80us.

  25. #75
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Quote Originally Posted by F@32 View Post
    lowfat and One Hertz,

    How does the performance of the ioDrive (SLC) differ from the ioXtreme (MLC)? I think I saw an AS SSD run of the ioDrive in this thread. lowfat, could you post your ioXtreme score?
    Here is my WEI run. PCIe bus was @ 105MHz.
    > Disk Sequential 64.0 Read 935.00 MB/s 7.9
    > Disk Random 16.0 Read 922.81 MB/s 7.9
    > Responsiveness: Average IO Rate 0.42 ms/IO 7.9
    > Responsiveness: Grouped IOs 7.67 units 7.6
    > Responsiveness: Long IOs 1.79 units 7.9
    > Responsiveness: Overall 13.76 units 7.9
    > Responsiveness: PenaltyFactor 0.0
    > Disk Sequential 64.0 Write 301.00 MB/s 7.7
    > Average Read Time with Sequential Writes 0.194 ms 7.9
    > Latency: 95th Percentile 1.701 ms 7.9
    > Latency: Maximum 4.664 ms 7.9
    > Average Read Time with Random Writes 0.229 ms 7.9
    > Total Run Time 00:00:59.14
