
Thread: GullLars' SSD discussion/rant corner

  1. #1
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513

    GullLars' SSD discussion/rant corner

    Like the title says, I felt we needed a thread for discussing, brainstorming, and ranting about various stuff concerning SSDs.
    I will try to start interesting discussions here and post links to various interesting SSD material. I like to get into technical discussions or benchmark data, but I think we should have a dedicated thread for SSD benchmarking.

    We could possibly make an SSD FAQ here, or start collecting Q's and A's for a dedicated thread. Feel free to rant about or ask about anything and everything regarding SSDs here, or post links to dedicated threads for help on subjects if you don't get answers or it takes too long.

    I'll try to consolidate and post a lot of material here in the coming days/weeks. For now, I have a 2000-word discussion/rant on ONFI 2, SSD scaling, some general architecture, and opinions on upcoming-generation SSDs. The main points of the post are performance density scaling and channel saturation/oversaturation.

    I include a couple of pictures to illustrate and to break the wall of text :P
    I have to upload some images, so it will take a couple of minutes before the second post is up.

  2. #2
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    ONFI 2 rant.

    Topics:
    Performance and scaling:
    *MBps/GB
    *R:W bandwidth ratio, raw and SSD aggregate, SLC vs MLC
    *dies per channel, # of channels, and channel saturation/oversaturation
    *complexity and price

    Target markets:
    capacity points/ranges
    IOPS points/ranges

    Features and interface:
    compression, NCQ, and the clean block pool writing method
    fat/regular/slim SSDs
    3Gbps vs 6Gbps
    SATA vs SAS vs PCIe

    misc:
    SSD only RAID controllers?
    DIMM-like SSD modules?
    mixed SLC/MLC?
    Multiple port SSDs?


    References:
    REF#1: http://onfi.org/wp-content/uploads/2...bottleneck.pdf
    REF#2: http://www.haifa.ibm.com/conferences...pers/2_2_2.pdf



    MBps/GB, a new metric.
    I want to start off by introducing you to a new metric I made up a few weeks back, as this is the basis of much of my frustration with SSD companies, and is the core point of this post. Broken down, this is a simple ratio showing performance (MBps, sequential throughput) per capacity (GB). It shows how much performance you get out of the flash you pay for, and can also be seen as performance density.
    MBps/GB is a very relevant metric for RAID arrays, where you often focus on aggregate performance, and it's easy to end up with more capacity than you need for high performance, which is wasteful. It's also a striking metric for defining what sort of usage an SSD is intended for.

    This is also a great metric for showing how well performance scales at different capacity points of the same SSD line.
    According to the paper I've given as REF#1, "ONFI 2 breaks IO bottleneck", synchronous SLC and MLC have the following

    MBps/GB given a dual plane SLC die holds 4GB, and a dual plane MLC die holds 8GB:
    SLC; (120/4=) 30MBps/GB read, (28/4=) 7MBps/GB write.
    MLC; (88/8=) 11MBps/GB read, (8/8=) 1MBps/GB write.
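
    To make the arithmetic explicit, here's a tiny Python sketch of the same calculation (just my own throwaway illustration, using the REF#1 die speeds and the 4GB/8GB die capacities assumed above); it also spits out the R:W ratio I get to in the next section:
[CODE]
# MBps/GB (performance density) and R:W ratio from the REF#1 per-die figures.
# Die capacities (4GB SLC, 8GB MLC dual-plane) are the assumptions stated above.

dies = {
    # name: (read MB/s, write MB/s, GB per die)
    "sync SLC": (120, 28, 4),
    "sync MLC": (88, 8, 8),
}

for name, (read, write, cap) in dies.items():
    print(f"{name}: {read / cap:.0f} MBps/GB read, "
          f"{write / cap:.0f} MBps/GB write, "
          f"R:W = {read / write:.1f}:1")

# sync SLC: 30 MBps/GB read, 7 MBps/GB write, R:W = 4.3:1
# sync MLC: 11 MBps/GB read, 1 MBps/GB write, R:W = 11.0:1
[/CODE]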

    Before I go on to post tables of MBps/GB for a selection of SSDs, I'll also touch on a second central point of this post, which ties in nicely with MBps/GB.

    R:W bandwidth ratio.
    This is another simple metric (with no unit), again a basic ratio showing the performance characteristics of the storage. This ratio is very important for what areas an SSD can be expected to be used in. I'll give you the raw NAND ratios for sync dual-plane die SLC and MLC from REF#1.
    SLC; (120:28=) 4.29:1 (rounded up, 4.3:1).
    MLC; (88:8=) 11:1.
    I will include the bandwidth ratio in the tables of MBps/GB.

    [Attached image: SSD performance density table.png]
    Things worth noting:
    *no SLC SSD has higher read performance density than Async SLC NAND
    *many low-capacity MLC SSDs have higher performance density than Async NAND.
    *The only SSDs with a >2:1 R:W ratio are: ioXtreme, Intel x25-V/M, JMF612 30GB, and C300 128GB, yet async MLC and both sync SLC and sync MLC have R:W > 4:1.
    *performance density scales very poorly with SSD capacity, especially for reads.


    This brings me to my next topic/segment.


    Dies per channel, # of channels, and channel saturation/oversaturation.
    In order to illustrate this more properly I refer to pages 22 and 23 of the ONFI 2 pdf. I've taken screenshots of the images to illustrate more easily here (the first was barely over 200KB, so you have to click it to see the full pic).

    [Attached image: ONFI_2_breaks_io_bottleneck p23, async vs sync MLC 2-plane.png]
    In order to fully appreciate these 2 images, you should read the entire ONFI 2 pdf; it's pretty quick to skim through.
    The first thing you will notice is that src sync (source synchronous, the ONFI 2 flash bus spec) has a huge impact on read performance for both MLC and SLC, a slight impact on write performance for SLC, and almost no impact on MLC write performance.
    You will also notice the differences become more visible with more dies per channel, especially 4 or more dies per channel.

    Here goes a quick round of some numbers and math:
    The ONFI 2.1 bus spec (flash channel) allows a 200MB/s transfer rate. Sync SLC can do 120MB/s read, 28MB/s write. Sync MLC can do 88MB/s read, 8MB/s write.
    A single SLC die will give 120/200 = 60% saturation of the channel for reads, and 28/200 = 14% saturation for writes.
    Two SLC dies on a single channel will fully saturate the channel for reads with 20% oversaturation (40MB/s wasted read), and 28% saturation for writes.
    To fully saturate a single channel for writes with SLC, you need 7 dies (196MB/s = 98% saturation), at which point you oversaturate the channel for reads by 320%, wasting 640MB/s potential read speed.
    A single MLC die will give 44% channel saturation for reads, and 4% for writes.
    Two MLC dies will give 88% channel saturation for reads, and 8% for writes.
    Three MLC dies will fully saturate the channel for reads, with 32% oversaturation (64MB/s wasted read), and 12% saturation for writes.
    To fully saturate a single channel for writes with MLC, you need 25 dies (200MB/s), at which point you oversaturate the channel by 1000% for reads, wasting 2000MB/s potential read speed.
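
    If you want to check or play with these numbers yourself, here's a rough Python sketch of the per-channel math (my own illustration; it only assumes the 200MB/s ONFI 2.1 channel and the REF#1 sync die speeds):
[CODE]
CHANNEL_MBPS = 200  # ONFI 2.1 bus speed per flash channel

def channel_saturation(dies, die_read_mbps, die_write_mbps):
    """Return (read saturation %, wasted read MB/s, write saturation %) for one channel."""
    raw_read = dies * die_read_mbps
    raw_write = dies * die_write_mbps
    read_sat = raw_read * 100 / CHANNEL_MBPS
    wasted_read = max(0, raw_read - CHANNEL_MBPS)
    write_sat = raw_write * 100 / CHANNEL_MBPS
    return read_sat, wasted_read, write_sat

# Two sync SLC dies: fully saturated reads with 20% oversaturation, 28% write saturation
print(channel_saturation(2, 120, 28))   # (120.0, 40, 28.0)
# Three sync MLC dies: 32% read oversaturation (64MB/s wasted), 12% write saturation
print(channel_saturation(3, 88, 8))     # (132.0, 64, 12.0)
[/CODE]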

    From these numbers, I would say 1-2 SLC dies per channel, and 2 MLC dies per channel, are fairly desirable to avoid wasting bandwidth and to secure a high performance density. Most consumer SSDs today have multiple dies per channel, and likely slower channels than 200MB/s. This leads to oversaturation of the channels (or limiting controller throughput) and wasted bandwidth, resulting in low performance density for higher capacity drives.

    Now on to the next part of this topic/segment, # of channels.
    Many current SSDs use 4 channels, some use 8 or 10. As long as SSDs use SATA 3Gbps, it can be (over)saturated with 2 ONFI 2.1 channels (200MB/s each) for reads. SATA 6Gbps can be saturated with 3 channels for reads.
    Because of the correlation between # of channels and read IOPS, 2-3 channels may not be desirable, since it would limit read IOPS to 15-30K.
    You could increase IOPS without oversaturating the SATA bus by only using 1 die per channel.

    On SATA 3Gbps, this would allow 3 channels of single-die SLC (20% oversaturation for sequential read), up to 300MB/s read, 30K IOPS read, 84MB/s write, and 20K IOPS write (12GB capacity). With 3 channels of single-die MLC, you'd get 264MB/s read, <22K IOPS read, 24MB/s write, and 6K IOPS write (24GB capacity).
    This all results in low capacities and low write speeds to keep from oversaturating the SATA 3Gbps bus, so a bump up to 6Gbps will push the problem back a bit.

    With 6Gbps and SLC, you could have 5 channels of single-die SLC before saturating the bus for reads, at which point you have 600MB/s read, 140MB/s write, <50K IOPS read, <35K IOPS write, 20GB capacity. Good IOPS, but low capacity.
    With 6Gbps and MLC, you could have 7 channels of single-die MLC and barely saturate the bus for reads, with 600MB/s (616MB/s raw) read, 56MB/s write, <50K IOPS read, <14K IOPS write, 56GB capacity. Again, good IOPS, a bit better but still low capacity, and very low write.

    As a conclusion, it seems inevitable to oversaturate SATA 6Gbps for reads to reach useful capacity points and/or write speeds above entry-level/RAID drives.
    There's also the factor that controller complexity and cost increase with the # of flash channels. From a cost vs performance density perspective, if you can get by with fairly low capacity and write speeds, 3 channels and 2 dies per channel seems to be a sweet spot for 6Gbps drives, regardless of SLC vs MLC. At that point, you also get 20-30K read IOPS, which could qualify as "good enough" for a lot of uses.
    3 channels, 2 dies per channel SLC: 24GB, up to 600MB/s read, 170MB/s write, 30K IOPS read, 40K IOPS write.
    3 channels, 2 dies per channel MLC: 42GB, up to 530MB/s read, 42MB/s write, 20K IOPS read, 10K IOPS write.
    Slightly /drool specs for RAID or OS+apps drives. Counting in controller complexity and the amount of flash, the MLC could cost below $200, and the SLC below $300.
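
    The drive-level estimates above can be sketched the same way (again just an illustration; it only models the sequential bandwidth caps, not IOPS, controller overhead, or spare area):
[CODE]
# Rough model of the drive-level estimates: per-channel bandwidth is capped by
# the 200MB/s ONFI 2.1 limit, and the drive total is capped by the host
# interface. Capacity here is raw flash; the MLC figures above presumably
# subtract spare area. Illustrative assumptions only.

CHANNEL_MBPS = 200

def drive_estimate(channels, dies_per_ch, die_read, die_write, die_gb, iface_mbps):
    read = channels * min(dies_per_ch * die_read, CHANNEL_MBPS)
    write = channels * min(dies_per_ch * die_write, CHANNEL_MBPS)
    return {
        "capacity_GB": channels * dies_per_ch * die_gb,
        "read_MBps": min(read, iface_mbps),
        "write_MBps": min(write, iface_mbps),
    }

# 3 channels x 2 sync SLC dies on SATA 6Gbps (~600MB/s usable):
print(drive_estimate(3, 2, 120, 28, 4, 600))
# {'capacity_GB': 24, 'read_MBps': 600, 'write_MBps': 168}  -> ~170MB/s write, as above
[/CODE]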


    So, with such difficult balancing between capacity, complexity, R:W ratio, and random/seq performance, what targets are there for next-generation SSDs? I'll try to draw some outlines of what I think.
    *OS+application drive: 32-128GB, SATA 6Gbps, max seq read and read IOPS (<100K), >10K IOPS write, 1 MBps/GB write
    *RAID member drive: 32-128GB, SATA/SAS 6Gbps
    *Laptop as only drive: 32-512GB, SATA 3/6Gbps
    *Database drive: 32GB-1TB, SATA/SAS 6Gbps or PCIe 2.0
    *Workstation scratch-disk / active data: 64GB-nTB, SATA/SAS 6Gbps or PCIe 2.0



    Features and interface.
    IMHO, NCQ should be standard on all SSDs using more than 1 channel and costing over $100. It's necessary to fully utilize the parallel nature of SSDs, but in the lowest end of low-cost, the 5000-7500 read IOPS possible to get without NCQ could be "good enough".

    The "clean block pool writing method", wich is just like it sounds, a pool of pre-erased blocks where random LBAs can be steamed sequentially to, is something i think will become common practice, and already is implemented by Intel, Micron, and SandForce (i strongly suspect, not confirmed). It's described in REF#2, here's a picture i "loaned" from Anandtech's SSD anthology to illustrate:
    [Attached image: Simplified illustration of clean block pool writing method.png]
    This basically allows the SSD to write random IOs at the same speed as sequential data, with a minor overhead penalty.
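
    To make the idea a bit more concrete, here's a heavily simplified Python sketch of a clean-block-pool write path (purely illustrative; the names and structure are made up by me and are not any vendor's actual firmware):
[CODE]
class CleanBlockPoolFTL:
    PAGES_PER_BLOCK = 64

    def __init__(self, num_blocks):
        self.clean_blocks = list(range(num_blocks))   # pool of pre-erased blocks
        self.active = self.clean_blocks.pop(0)        # block currently being filled
        self.next_page = 0
        self.mapping = {}                             # logical LBA -> (block, page)
        self.flash = {}                               # (block, page) -> data

    def write(self, lba, data):
        # Random LBAs are simply appended ("streamed") into the active clean block,
        # so the flash only ever sees sequential writes into pre-erased blocks.
        if self.next_page == self.PAGES_PER_BLOCK:
            self.active = self.clean_blocks.pop(0)    # grab the next clean block
            self.next_page = 0
        location = (self.active, self.next_page)
        self.flash[location] = data
        self.mapping[lba] = location                  # any old location becomes stale
        self.next_page += 1

    def read(self, lba):
        return self.flash[self.mapping[lba]]

# Garbage collection (reclaiming blocks full of stale pages back into the
# clean pool) is left out; that is the "minor overhead penalty" mentioned above.
[/CODE]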

    To avoid sustaining massive amounts of random IOs and shortening drive life or causing high power draw, there could be implemented a solution where you have full speed for random writes up to 100K IOs within 10 seconds, and after that 10K IOPS sustained until the average is under 500-600K/min.
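
    One way to express that throttle idea is a simple token bucket; the sketch below is just my interpretation of the thresholds above, not something any controller actually does:
[CODE]
import time

class WriteThrottle:
    # Thresholds from the suggestion above: a burst allowance of ~100K
    # random-write IOs, then a sustained rate of 10K IOPS (600K/min).
    BURST_IOS = 100_000
    SUSTAINED_IOPS = 10_000

    def __init__(self):
        self.tokens = self.BURST_IOS
        self.last = time.monotonic()

    def allow(self, ios=1):
        now = time.monotonic()
        # Refill the bucket at the sustained rate, capped at the burst size.
        self.tokens = min(self.BURST_IOS,
                          self.tokens + (now - self.last) * self.SUSTAINED_IOPS)
        self.last = now
        if self.tokens >= ios:
            self.tokens -= ios
            return True    # full speed
        return False       # caller should fall back to the sustained rate
[/CODE]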

    I also think more manufacturers should consider compression. It's a good performance booster for OS+apps and many database usage patterns, and the space saved by compression acts as dynamic spare area to increase sustainable performance and drive longevity. It's a welcome write boost for lower capacity MLC boot drives, and could give many times the write speed for very compressible files (empty files could literally get a 10x write speed boost on some configurations).

    Skinny/regular/fat SSDs is a term coined by Zsolt Kerekes, editor of StorageSearch.com. It refers to the ratio of RAM to flash. Link to article for details: http://www.storagesearch.com/ram-in-flash-ssd.html
    I think it would be interesting if some manufacturer made a fat SSD with the RAM as a combination of burst-write buffer and read cache. Fat SSDs are defined as having 1% of total capacity or more in RAM size. I think this would be interesting for higher-end OS+App drives.

    Something I haven't discussed yet is the benefit of SAS over SATA. SAS is full duplex and can sustain 6Gbps in both directions, which is useful if the SSD is able to oversaturate the interface for reads, since you could then write some to the drive without losing much, if any, read speed. Because of the full duplex, it's also superior for mixed random read/write IOPS latency. The downside of SAS is of course that it's not compatible with SATA HBAs and most motherboards.

    There's also a strengthened case for a PCIe interface for upcoming SSDs, since it's possible to saturate 6Gbps interfaces with 3 channels (or 5-7 dies). With PCIe 2.0 x4 or higher, you have 2000MB/s or more bandwidth available. There's also the lower latency benefit, and avoiding HBAs as bottlenecks. By using compression together with a PCIe 2.0 interface, some files could get a significant speed boost.

    In order to saturate a PCIe 2.0 x16 link for reads without compression, you need 40 saturated flash channels. With 2 MLC dies per channel, you'd need 45 channels / 90 dies. With 8GB per die, that's 720GB. If set up with double-stacked packages with 2 dies per package, it could be a 6-long, 4-wide grid arrangement with one spot missing, which should fit on a card smaller than a full-size graphics card.
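
    The sizing works out roughly like this (my own worked numbers, assuming ~500MB/s usable per PCIe 2.0 lane):
[CODE]
# Rough worked numbers for the PCIe 2.0 x16 case, assuming ~500MB/s usable
# per PCIe 2.0 lane and the 200MB/s ONFI 2.1 channel limit from earlier.

link_mbps = 16 * 500                       # ~8000 MB/s for a PCIe 2.0 x16 link

saturated_channels = link_mbps / 200       # 40.0 fully saturated ONFI 2.1 channels
channels_2die_mlc = link_mbps / (2 * 88)   # ~45.5, i.e. roughly 45 channels / 90 dies
capacity_gb = 90 * 8                       # 90 MLC dies x 8GB per die = 720GB

print(saturated_channels, channels_2die_mlc, capacity_gb)   # 40.0 45.45... 720
[/CODE]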



    Over to hypothetical thought experiments.
    Speaking of compression and PCIe buses, there's a case to be made for a RAID controller with on-the-fly compression, like SandForce. It would necessitate an additional level of abstraction, but could boost the speed of drives or arrays significantly for highly compressible files, provided a powerful enough processor or co-processor to handle on-the-fly compression at such speeds.

    I'll drop DIMM-like SSD and mixed SLC/MLC setups for now, since the post went a bit long.


    The last thing I'll mention as a hypothetical is an SSD RAID box, specifically made as a 3.5" or 5.25" drive with one or more 4x mini-SAS 6Gbps ports on the back. The back may be pass-through for multiple 3-4 channel SSDs inside the box sharing PCBs.
    A quick mental model tells me you could fit roughly 16 such 3-4 channel SSDs in a 5.25" box: 4 PCBs with 4 modules each, each PCB having a 4x mini-SAS 6Gbps port, and all 4 PCBs sharing 1-2 molex power plugs.
    A 3.5" box could probably hold 8 (3-channel) SSD modules and have 2 4x mini-SAS ports.


    So, what do you guys think? :P
    Last edited by GullLars; 05-25-2010 at 10:02 PM. Reason: Format conflict from notepad to BB code

  3. #3
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    GullLars - nice article - I will need to read it a couple more times along with the backup material.
    Interesting new metrics - is there a limit to how much can be parallelized within an SSD - how many channels an SSD can include?
    Seems like I read that x25-m has 10 channels - is that the most so far?

  4. #4
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Nice thread GullLars and as usual some great insights. I’m looking forward to seeing this thread develop.

  5. #5
    Xtreme Member
    Join Date
    Nov 2007
    Posts
    227
    Amazing and great idea. I did something similar on another site which has been hit well over a million times now, and I am not a techie per se compared to some of you, with absolutely brilliant ideas as seen in the start of this thread. I'm here!!!!... if that's ok.

  6. #6
    Banned
    Join Date
    May 2009
    Posts
    676
    *no SLC SSD has higher read performance density than Async SLC NAND
    *many low-capacity MLC SSDs have higher performance density than Async NAND.
    a few things,
    1.
    the basic purpose of Micron's document is to show the differences and benefits of the ONFI 2 standard, adding to it some architecture concerns involving the no. of dies per channel and number of channels per die,
    the asynchronous NAND interface is the ONFI 1 standard for devices,
    ONFI 2 supports a synchronous interface and so is much faster.

    2.
    what do you want to say by that?
    going numbers over numbers can be endless and not really necessary,
    calculating the ratios between read and write bandwidth and MLC vs SLC in respect to that,
    fewer channels vs many, more dies vs fewer, dies vs channels and whatever,
    i feel like this freaking robot from Star Trek.

    now for the rest,

    one question,
    why not deal with what is?
    what isn't, simply, isn't.
    there are many things yet to understand about ICs, signaling, pins, general PCB build, controller architecture, NAND quality, the companies that are involved with manufacturing and their agendas, new technologies and the future of nano-electronics, etc. etc.

    this post is not about understanding how the technology works, but rather about the outcome of it.
    the first question you should ask yourself before pointing out such ideas is, why hasn't anyone tried it already, or, possibly someone is already trying it..?

    now,
    there are 2 reasons not to share an idea with companies,
    1.
    you would not like anyone to take it, yet posting it in a known forum does the opposite,
    2.
    you are unsure whether this is a good idea or not, and so you ask people to share their thoughts.

    and why, instead of wasting time talking about these things, not just go over to the OCZ or Mushkin forums and ask people there how feasible your ideas are..
    send an e-mail to Anand, or bug one of the guys from one of the companies by fishing their mail from some letter they have made or anything,
    you will never go forward by drawing a herd of enthusiasts after you,
    you will go forward when you march alone.

  7. #7
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Onex, I have no financial stake in SSD technology, and the post is just me airing my thoughts and frustrations.
    I don't have any problem with companies reading this thread and finding something interesting or taking ideas from here; if it results in better products, I'd say it's a good thing.
    I ask people to share their thoughts because it's enlightening and fun discussing technology, not because I'm uncertain about my ideas (I won't say I'm certain all are good ideas though, but I find them interesting).
    EDIT: like it says at the beginning of the second post, "ONFI 2 rant", it's not a research/technical paper or conventional article.


    I have another table I feel like sharing, giving an overview of SSD controllers and the tech in them. This is a work in progress, and I've colored the things I don't have info on or am uncertain about yellow. Take the numbers for what they are, rough numbers for an overview.
    It's sorted by manufacturer and controller series.
    [Attached image: SSD controller specs and features table.png]
    If you have or come across info for the yellow squares, please post it so I can update the table.
    Last edited by GullLars; 05-26-2010 at 06:28 AM.

  8. #8
    Banned
    Join Date
    May 2009
    Posts
    676
    i just think you are digging the same ground again and again with no real purpose,
    i don't discourage you though.

  9. #9

  10. #10
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    That's the site I used for the JMicron specs, but it says nothing about the cache size or spare area, and I didn't find info on the capacity range (just the number of devices supported).
    I didn't find numbers for random read IOPS on the JMF601/602 either. I know it's out there, but it drowned in the searches I did.

  11. #11
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    You can upgrade the firmware on the X25-E and they do release new versions from time to time, but it doesn't do much (anything?) for performance or otherwise in my experience.

    And where did you get 40k 4kb write IOPs for the E? I am hard pressed to get 17k out of mine. 65k-70k for 4 of them in R0.
    Last edited by One_Hertz; 05-26-2010 at 08:19 AM.

  12. #12
    PCMark V Meister
    Join Date
    Dec 2009
    Location
    Athens GR
    Posts
    771
    Intels don't have any cache..

  13. #13
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    From diskusjon.no, SSD benchmark thread, IOmeter single drive results top 10 list:
    1. Ourasi: Read: 39945.13, Write: 43352.82, Workstation: 21163.62, X25-E G1 32GB ICH9R
    I've also seen screenshots of CDM 3.0 and AS SSD where it got >160MB/s for 4KB random write at QD 32/64. I couldn't find one right now; I'll check if Ourasi cares to run a quick round of CDM 3.0 on his 2R0 x25-E ICH10R array as a reference. Intel's data sheet for the x25-E lists it as 3.3K random write IOPS ...
    Intel SSDs don't use the external cache for user data, but there is one, and the size is what's listed in their data sheets.

    EDIT: Anvil hasn't gotten around to uploading results for his LEs or C300s as single drives yet, so Ourasi still holds 1st place for single drive IOmeter score.

  14. #14
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    OCZ Core Specs:
    Read (ATTO Disk Bench) 132.888 MB/s
    Write (ATTO Disk Bench) 89.587 MB/s

    Reality: http://www.alternativerecursion.info/?p=106

    http://www.anandtech.com/show/2614/10

    Finding good data on the JMicron JMF602 controller is nearly impossible, but from what I've heard it's got 16KB of on-chip memory for read/write requests. By comparison, Intel's controller has a 256KB SRAM on-die.

    The OCZ Core used 8GB NAND chips from Samsung.

    http://www.madshrimps.be/vbulletin/f...c-write-56856/

    In the case of SSDs, some of the capacity is reserved for formatting and redundancy for wear leveling. These reserved areas on an SSD may occupy up to 5% of the drive’s storage capacity. On the Core V2 Series the new naming convention reflects this and the 30 is equivalent to 32GB, the 60 is equivalent to the 64GB and so on.


    Some more bits here: http://hothardware.com/Articles/Four...-Mtron/?page=2

  15. #15
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Quote Originally Posted by One_Hertz View Post
    And where did you get 40k 4kb write IOPs for the E? I am hard pressed to get 17k out of mine. 65k-70k for 4 of them in R0.
    http://www.diskusjon.no/index.php?ap...tach_id=369239
    That's a screenshot of an x25-E in use in a webserver for months without cleaning. 40K IOPS 4KB random write in CDM 3.0.
    Ourasi also got >40K IOPS random write in IOmeter over a year ago when he benched one as a single drive.

    EDIT: thanks for the data audienceofone, I'll add it to a revised version of the table later.
    Any thoughts on the rant in post #2, or the metric MBps/GB? I'm wondering if you think "performance density" could be an appropriate name.
    On a related theme, I'll try to make some graphs of IOPS/average access time scaling with queue depth for a selection of drives and arrays later, but that will be a big post with nice graphs and stuff.
    Last edited by GullLars; 05-26-2010 at 12:59 PM.

  16. #16
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    I never tried CDM, but certainly none of the 4 of my drives get anywhere near that. IOMeter?

  17. #17
    Banned
    Join Date
    May 2009
    Posts
    676
    I've also seen screenshots of CDM 3.0 and AS SSD where it got >160MB/s for 4KB random write QD 32/64. I couldn't find one right now, i'll check if ourasi cares to run a quick round of CDM3.0 on this 2R0 x25-E ICH10R array as a referance. Intel's data sheet for the x25-E lists it as 3.3K random write IOPS ...
    first tip:
    http://www.google.com/images?um=1&hl...=&oq=&gs_rfai=

    gives a great help almost everywhere..

    And where did you get 40k 4kb write IOPs for the E? I am hard pressed to get 17k out of mine. 65k-70k for 4 of them in R0.
    you talk about 4KB write with a QD of 1, which stands for ~16K IOPS,
    lars is talking about a QD of 32, which stands for ~200MBps or 50K IOPS
    Last edited by onex; 05-26-2010 at 02:50 PM.

  18. #18
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    The new Google layout has had me confused for some time; I didn't think to do an image search... :P I was searching for benches or reviews of the x25-E, and didn't find anything that way.
    As you can see from the pictures, at QD 32 the x25-E deserves the E in its name, especially if you factor in when it was released (2008). It's still ridiculously fast compared to any drive on a 3Gbps controller. Only the newest controllers that have barely hit the market can put up a fight, and at a cost. If you use random data (not compressible), the only drive on the consumer market that can beat it is the C300 256GB, and for some workloads the 128GB too. The SandForce drives and C300 128GB top out around 130-140MB/s for writes, limiting write IOPS to 30-35K (again, for incompressible data).
    This should be nicely reflected in AS SSD total scores. The x25-E gets around 550 points from ICH10R. The C300 256GB gets more like 650-700 points, and the C300 128GB is around 580.

  19. #19
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by onex View Post
    you talk about 4KB write with QD of 1 which stands for ~16K iops,
    lars is talking about QD of 32 which stands for ~200MBps or 50K IOPs
    No actually I am talking queue 32... None of my X25-Es can do over 17k 4kb 100% random writes on ICH10R using IOMeter. Perhaps something is up with my config.

  20. #20
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I'd have that checked out. If you can split the RAID without too much hassle, try a single drive in AHCI mode, and post AS SSD results + CrystalDiskInfo here.

  21. #21
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Would be very annoying to split as it's my boot array.

    I tried googling but I couldn't find any iometer results.

  22. #22
    Banned
    Join Date
    May 2009
    Posts
    676
    No actually I am talking queue 32... None of my X25-Es can do over 17k 4kb 100% random writes on ICH10R using IOMeter. Perhaps something is up with my config.
    boy, that is ODD.

    http://www.xtremesystems.org/forums/...&postcount=714

  23. #23
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    This is all CDM... I am talking IOMeter.

  24. #24
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    One Hertz,

    Feel free to download some runs on one of my Es, Link

    It includes result files for both read and write. (aligned 4KB)

    Hard to tell what could be wrong with your config, the drive in the screenshot linked by GullLars was actually misaligned as well.
    Both my Es are available for a few benchmarks if there's anything you'd like to see.

  25. #25
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Well, that is interesting. I've had Es ever since they came out, on multiple computers, and have never gotten anywhere close to those results on any config. My read results are exactly the same as the ones presented by you guys, but 4K writes are way off.

