
Thread: Fastpath..

  1. #176 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Post with the "detailed" config:
    http://www.xtremesystems.org/forums/...4&postcount=84
    1 worker is QD 1-32, 4 workers is QD 4-128 (4x1-32).

    I'm hoping mbreslin will run the 4-worker detailed one with 1-8R0 scaling if he has the time and patience; it would be "perfect" for showing RAID scaling in the relevant QD range. Below QD 4 (1, 2, 3) there is almost nothing gained from SSD RAID anyway, and above QD 128 is unrealistic for almost any usage.
    EDIT: mbreslin, if time is a concern, you can set the run time to 9s + 1s ramp (the page to the far right in IOmeter); it shouldn't give more than 1-2% deviation on the 9260 for random-read-only.
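    For reference, a quick sketch (Python; the helper name is mine, and this does not generate an IOmeter .icf file) of the QD coverage the two configs give:
    Code:

def total_qd_steps(workers, per_worker_qds):
    """Total outstanding IOs at each step when every worker runs the same per-worker QD."""
    return [workers * qd for qd in per_worker_qds]

one_worker = total_qd_steps(1, range(1, 33))    # QD 1..32
four_workers = total_qd_steps(4, range(1, 33))  # QD 4..128 in steps of 4

print(one_worker[0], "...", one_worker[-1])      # 1 ... 32
print(four_workers[0], "...", four_workers[-1])  # 4 ... 128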
    Last edited by GullLars; 06-01-2010 at 10:50 AM.

  2. #177 - mbreslin - Xtreme Enthusiast - Join Date: Feb 2010 - Posts: 701
    I was doing it before you posted that sir.

    I'm on set #2, 6 to go. You're a taskmaster.

    Edit: My wife is having contractions (~7 weeks early). If they turn out to be nothing, I will have all day today for whatever. I'd like to get to pcmv finally, but I'd also like to do full scaling on the 9211 for completion's sake. We'll see.
    Last edited by mbreslin; 06-01-2010 at 11:20 AM.

  3. #178 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    The reason I ask for the 4-worker detailed config is compatibility with my other results in the "database", and its inherent compatibility with 2^n, n>1.

  4. #179 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    Here are my runs.
    Attached Files
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  5. #180 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Here's the detailed scaling for mbreslin's 8R0 c300 9260 FP.

    [Attachment: 8R0 c300 9260FP detailed 1+4W IOPS.png]
    [Attachment: 8R0 c300 9260FP detailed 1+4W IOPSdivAvgAcc.png]
    Good throughput the entire way
    best scaling from QD 1-16
    best QOS (quality of service) from QD 12-32
    >700K IOPS/accesstime from QD 14-32
    From this, I'd say this array's ideal usage target is heavy desktop/workstation and medium server load. Knowing the huge throughput for both read and write, and the high random write IOPS, reinforces that analysis.

    A couple of numbers to note:
    100K IOPS @ 140µs average response time @ QD 14. (715.9K IOPS/accesstime)
    151K IOPS @ 205µs average response time @ QD 31. (739.6K IOPS/accesstime)
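    As a sanity check on the "IOPS/accesstime" figures, here is a minimal sketch (Python; the helper name is mine) of how the metric falls out of IOPS and average response time in milliseconds:
    Code:

def qos(iops, avg_access_time_ms):
    """The 'IOPS/accesstime' quality-of-service figure: IOPS divided by average access time in ms."""
    return iops / avg_access_time_ms

print(round(qos(100_000, 0.140)))  # ~714K, in line with the ~715.9K figure at QD 14
print(round(qos(151_000, 0.205)))  # ~737K, in line with the ~739.6K figure at QD 31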

    Graphs for Computurd coming up soon.

  6. #181 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Graphs for Computurd's detailed runs:
    IOPS:
    [Attachment: Comp 8R0 vertex 9260FP detailed 1+4W IOPS.png]
    IOPS/average accesstime:
    [Attachment: Comp 8R0 vertex 9260FP detailed 1+4W IOPSdivAvgAcc.png]
    Fairly even scaling over the entire 1-worker range; diminishing returns set in around QD 50.
    Best QOS from QD 6-56
    >190K IOPS/accesstime from QD 9-52
    This is a good overall array with no specific focus point. Knowing its throughput and random write IOPS, it's good for anything except random-write-heavy server use.

    A couple of numbers to note:
    56K IOPS @ 281µs average response time @ QD 16. (202.1K IOPS/accesstime)
    100K IOPS @ 516µs average response time @ QD 56. (195.4K IOPS/accesstime)
    120K IOPS @ 965µs average response time @ QD 116. (124.5K IOPS/accesstime)

    EDIT: I'm thinking about defining 3 segments on the QD 1-128 scale: QD 1-8 = low load, QD 8-32 = medium load, QD 32-128 = heavy load. This also helps identify where the arrays have their strengths and lets us compare them within each segment (typical desktop usage would be low load, with spikes of medium load). What do you guys think?
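    A trivial sketch of that bucketing (Python; which segment the boundary QDs land in is an arbitrary choice on my part):
    Code:

def load_segment(qd):
    """Proposed load segments for the QD 1-128 scale; boundaries assigned to the lower segment here."""
    if qd <= 8:
        return "low"      # typical desktop usage
    elif qd <= 32:
        return "medium"   # heavy desktop / workstation, medium server load
    else:
        return "heavy"    # heavy server load

print([load_segment(qd) for qd in (1, 8, 16, 32, 64, 128)])
# ['low', 'low', 'medium', 'medium', 'heavy', 'heavy']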
    Last edited by GullLars; 06-01-2010 at 03:37 PM.

  7. #182 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    I agree. I would like to really thank you for the amount of time you are putting into this project, and into our forum as a whole. You have been an excellent addition to an already great group of guys. Your knowledge, insight, willingness to help others, and in-depth analysis are invaluable, and I'm sure all the guys would agree. Thanks man!!!
    Keep up the great work

    In regards to the RocketRAID 640... dunno if I'm gonna pick it up or not. If we can get an ETA on the Areca I will know for sure... it's just something to play with anyway, only 160 bucks. But I think that money might be better spent on something else, considering I actually have no 6Gb/s devices to really put it through its paces. Then again, I grabbed the 9260 and 9211 with no 6Gb/s devices and no prior knowledge, and they both worked out great. I might just get another 9211 for this array; when FastPath comes out for the 9211 I already have a key, and they are interchangeable. Getting impatient here for 6Gb/s Intels... or Jetstream, whichever.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  8. #183 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Great to be appreciated. The reason I'm doing these types of projects (like the 0.5-64KB QD 1-128 random read, and the 4KB random read here) is that nobody else does, and I want to know. I'm basically feeding my own curiosity, and sharing it where I think others are hungry for knowledge. I think you will like what I'm working on on the side for another thread on this forum; I posted it in a zip file a couple of pages back. It was HDD, SSD, and array scaling measured in different ways.


    Here is what I came to post:
    Anvil x25-E 2R0 9211 softRAID.
    IOPS:
    [Attachment: anvil 2R0 x25-E 9211 SR 1+4W IOPS.png]
    IOPS/accesstime:
    [Attachment: anvil 2R0 x25-E 9211 SR 1+4W IOPSdivAvgAcc.png]
    Pretty smooth scaling with 1 worker, with diminishing returns starting around QD 8 and setting in fully from QD 16; the 4-worker run shows the decline in scaling from those diminishing returns.
    Best QOS from QD 8-28 with >140K IOPS/accesstime, peaking at QD 13-18 with >160K IOPS/accesstime.
    This array has a focus point around QD 8-32, i.e. medium workstation/server load or heavy multitasking on the desktop, which also correlates well with its sequential throughput and random writes.

    A couple of numbers to note:
    50K IOPS @ 314µs @ QD 16. (162K IOPS/accesstime)
    75K IOPS @ 849µs @ QD 64. (88.6K IOPS/accesstime)

    Next up, I will make finely granulated ("detailed") comparisons for the linear QD ranges 1-8, 8-32, and 32-128 for the arrays I have that data for.
    SteveRo: 9211 integrated and software 8R0 acard, and 1231ML software 12R0 acard.
    Tiltevros: 9260 FP 8R0 x25-M.
    mbreslin: 9260 FP 8R0 C300.
    computurd: 9260 FP 8R0 vertex 30GB.
    Anvil: 9211 softRAID 2R0 x25-E.

  9. #184 - mbreslin - Xtreme Enthusiast - Join Date: Feb 2010 - Posts: 701
    I'm going to do the full range of random read on 9211 maybe tonight or tomorrow and then I'll take another day or 2 and try to dial in another good pcmv score. After that I'm all done until the 1880 if I can sneak it in past the wife.

    Here are the detailed results for the 9260 FP with 1-8 drives:

  10. #185 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Here's a finely granulated linear QD stepping comparison for low, medium, and high QD for the arrays listed in my post above.
    This time I listed IOPS and IOPS/accesstime side by side. I want your feedback on using these as a sort of IOPS = throughput and IOPS/accesstime = quality of service.
    [Attachment: low QD linear comparison.png]

    [Attachment: medium QD linear comparison.png]

    [Attachment: high QD linear comparison.png]


    My reasoning is to look at each IO as a product, and the delivery of IOs as the service. The access time is then the delivery time for each IO, and counts as the quality of each IO.
    By this reasoning, if you double the number of IOs but triple the delivery time, you get lower quality of service (66% of previous).
    If you double the number of IOs and increase delivery time by 50%, you get better quality of service (33% higher).
    As a note, (tired, possibly temporarily flawed) logic dictates you'll always peak QOS at QD <= #channels (or # of drives in the case of HDDs) unless* there's something funky going on.
    (*EDIT: which there is for Steve's 8R0 softraid on the 9211, which peaks at QD 10...)
    (*EDIT2: I found a plausible explanation. It could be due to channel saturation and statistical distribution: if the IOPS gained in % is higher than the added access time in %, the QOS increases. With QD 8 on 8 channels, you'd have something like SUM(8-(8/n)) for n = 1..8. I'm a bit too tired to do it accurately now, but I'd guess around 65-70% saturation. So if increasing QD by 1 adds at most 1/(previous QD) to the access time, and that 1-step QD increase gives a larger IOPS gain % from higher saturation than the access time increase from the queue, you could increase QD a few steps beyond #channels and still get higher QOS.)
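    A rough way to sanity-check that 65-70% guess (my own back-of-the-envelope model, assuming each outstanding IO lands on a uniformly random channel, which real controllers won't do exactly):
    Code:

def expected_saturation(qd, channels):
    """Expected fraction of busy channels if each of 'qd' outstanding IOs hits a uniformly random channel."""
    return 1 - (1 - 1 / channels) ** qd

for qd in (4, 8, 12, 16):
    print(qd, f"{expected_saturation(qd, 8):.0%}")
# QD 8 over 8 channels comes out around 66%, in line with the 65-70% estimate above,
# which is why a few QD steps beyond #channels can still raise QOS.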

    For reference, the total channel counts for the setups above are:
    SteveRo 8R0 acards: 8 channels
    SteveRo 12R0 acards: 12 channels
    Anvil 2R0 x25-E: 20 channels
    Computurd 8R0 vertex: 32 channels
    Mbreslin 8R0 C300: 64 channels
    Tiltevros 8R0 x25-M: 80 channels
    This correlates nicely with the shape of the graphs, but not necessarily with the score, since Steve's acards have roughly double (or more) the IOPS per channel of some of the other setups.
    If I included my own 2R0 Mtron Pro (4 internal channels, but no NCQ = acting as 1 channel each), it would start out right between the C300s and the acards, peak at QD 2, and be a line at the bottom of the graphs beyond QD 8.

    Now I've got to get some sleep. I'll have a look at the 1-8R0 detailed results when I get up, mbreslin.
    You should have enough graphs and numbers in the meantime, and if you get bored, there's the zip with the xlsx file I posted a couple of pages back.
    Last edited by GullLars; 06-01-2010 at 08:03 PM.

  11. #186 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    The difference in channels between Tiltevros' array and mbreslin's array goes a long way towards explaining the phenomenon of Tilt being faster in the 4-worker results. The C300 spanks the x25s at low QD numbers, where the number of channels is not as relevant; there it is just pure speed per channel that matters. When running with four workers and NCQ, the x25s have more channels to deliver on. So that's why, in my estimation, you see the difference in their arrays more than from any other single thing (including beta firmwares/drivers).

    So with simple math we should be able to see how accurate the scaling difference is. Tilt is getting 190K IOPS compared to roughly 150K on mbreslin's array, a gap of about 21 percent. Tilt has 80 channels compared to mbreslin's 64, a gap of 20 percent. More channels = proportionally more throughput when you are entirely saturating the controller (via four workers). Whereas in the low-QD area mbreslin would be faster, in server or enterprise type applications (which the Intels are specifically designed for in the first place) the Intels will win. Interesting.
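    Putting that back-of-the-envelope comparison in numbers (a quick check of my own, using the rough 190K/150K figures quoted above, so treat the inputs as approximate):
    Code:

def pct_gap(high, low):
    """How far 'low' falls short of 'high', as a percentage of 'high'."""
    return (high - low) / high * 100

print(f"IOPS gap:    {pct_gap(190_000, 150_000):.0f}%")  # ~21%
print(f"Channel gap: {pct_gap(80, 64):.0f}%")            # 20%
# The two gaps line up within a couple of percent, consistent with throughput tracking
# total channel count once the controller is fully saturated.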

    One more caveat of the x25 vs. C300 situation: the C300 has higher latency than the Intels, even over ICH10R if I recall correctly, and that also explains the QOS differences.

    the vertex at 32 channels is a terrible solution that only an idiot would have spent that much money on

    As to the emphasis placed on access time of the arrays... I agree that technically it is going to be a feasible measurement of QOS, but I wonder if there is too much emphasis placed upon that measurement. I mean, right now what kind of ratio are we using for scoring, and is that accurate? Is it a 1:1 ratio or should it be something like 3:1? I am not sure the results should be weighted so much in favor of the access time. It is definitely an important factor, but how much of a factor is a very good question. When you are pushing 80K IOPS, does a slight increase in access time (latency) really matter that much? This will be a hard question to answer. Is it really half the speed of another array also pushing 80K IOPS? I can say emphatically no! At least not in a desktop, or even benching, scenario.
    And please, no complicated mathematical equations. I know it makes the Barefoots look like LOL

    But this info is gonna be great for making some conclusions with, even if they are half-baked like mine
    Last edited by Computurd; 06-01-2010 at 08:10 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  12. #187 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    I see your EDIT2, GullLars, and agree with it. There is naturally going to be an increase in access time as the IOPS and QD go up; it is inevitable. But the value of the increased access time vs. increased IOPS is what is important. I believe that is what you are saying, at least... I think right now the access times are being given too much emphasis.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  13. #188 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    On the Norwegian forum, I proposed using an IOPS+(IOPS/accesstime) combination to rate desktop and workstation usage, as it places more emphasis on throughput after the access time passes 1ms, so it works slightly against the hard drop-off of QOS seen in the graphs above when breaching QD > channels. There would still be a drop-off, but it would be cushioned by the throughput. It also allows IOPS/accesstime to give a (potentially huge) boost over simple IOPS, should access time stay low when scaling QD.
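    A minimal sketch of that combined rating (Python; the exact weighting and units are up for discussion, this just shows the shape of IOPS + IOPS/accesstime):
    Code:

def combined_score(iops, avg_access_time_ms):
    """Proposed combined rating: raw IOPS plus the IOPS/accesstime QOS term (access time in ms)."""
    return iops + iops / avg_access_time_ms

# Same throughput at three different latencies: below 1 ms the QOS term dominates,
# past 1 ms the raw IOPS term is what cushions the score.
print(round(combined_score(100_000, 0.140)))  # ~814K
print(round(combined_score(100_000, 0.516)))  # ~294K
print(round(combined_score(100_000, 2.0)))    # 150K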

    As for your comment about the relevance of access time when pushing 80K+ IOPS, I'd say at that point access time becomes more important than throughput, since the throughput is already nearing the limit of what you can actually use. By reaching the same throughput at lower QDs, and thereby lower latencies, you get a better "CPU acceleration effect". That effect may let you get higher CPU utilization % without increasing the parallelism of the workload, or let you use a weaker CPU (or fewer cores) to get the same job done. This is one of the things allowing datacenter clients to add an ioDrive, for example, to their SQL servers and halve the number of CPUs (like MySpace did).
    http://www.fusionio.com/load/media-d...aintenance.pdf

  14. #189 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    Quote Originally Posted by GullLars View Post
    On the Norwegian forum, I proposed using an IOPS+(IOPS/accesstime) combination to rate desktop and workstation usage, as it places more emphasis on throughput after the access time passes 1ms, so it works slightly against the hard drop-off of QOS seen in the graphs above when breaching QD > channels. There would still be a drop-off, but it would be cushioned by the throughput. It also allows IOPS/accesstime to give a (potentially huge) boost over simple IOPS, should access time stay low when scaling QD.
    That is more along the lines of what I am thinking. I understand what you are saying about the latency becoming more important at higher IOPS; makes sense. But I don't think it would have as large an influence at lower QD settings as we are seeing with the QOS as it stands. At higher QD I could see it having more importance than at lower QD. But how to quantify that? The real difference would be in the server type applications. Maybe make different "weighted" ratios for desktop vs. server? Like different profiles of a sort? Hard to quantify. Maybe have a sliding scale? As the QD increases, so does the importance of latency?
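    One possible shape for that sliding scale (just a sketch of the idea; the weight curve below is completely arbitrary on my part, not a proposal of exact numbers):
    Code:

def latency_weight(qd, full_weight_qd=128):
    """Weight on the latency (QOS) term, growing from near 0 at QD 1 towards 1 at full_weight_qd."""
    return min(qd / full_weight_qd, 1.0)

def weighted_score(iops, avg_access_time_ms, qd):
    w = latency_weight(qd)
    return (1 - w) * iops + w * (iops / avg_access_time_ms)

# At low QD the score is mostly raw IOPS; at high QD it is mostly IOPS/accesstime.
print(round(weighted_score(56_000, 0.281, 16)))    # ~74K  (IOPS-dominated)
print(round(weighted_score(120_000, 0.965, 116)))  # ~124K (latency-dominated)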

    BTW... what do you make of my comparison of the x25 vs. C300 scaling?
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  15. #190 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    I mostly agree: since they use the same sort of NAND, they should have the same physical latency, so (for random read) more channels = more throughput, given the controller can handle it.

  16. #191 - mbreslin - Xtreme Enthusiast - Join Date: Feb 2010 - Posts: 701
    I finally switched cables and reseated the controller, just out of curiosity: identical results. Also, I never responded when you said I should tell LSI they should support the C300, Gull. I did tell them, and their response was something like (I can actually look up the quote if you want): 'while your drives aren't in our matrix, controller policies have little effect in between brands and are usually relevant only to drive type, hdd vs ssd vs sas'

    Considering that if I follow Computurd's exact settings I wind up with worse results than his drives (I've tried), and that using my own best tested configuration the results aren't really close (my drives leave his fairly far back in the rearview mirror), I'd say this was just a level-1 tech response from someone who didn't have much experience with SSDs.

  17. #192 - Computurd - Xtreme Guru - Join Date: Aug 2009 - Location: Wichita, Ks - Posts: 3,887
    Now what we need to complete the whole fandango is some results from Mr. audienceofone with his ioXtreme. His accesstime/IOPS numbers should be ridiculously good.

    Edit: hey GullLars, could you put up the final graphs with hardware RAID only?
    Last edited by Computurd; 06-01-2010 at 09:57 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  18. #193 - Xtreme Mentor - Join Date: Feb 2009 - Posts: 2,597
    Comp, I would love to post results from an ioXtreme but I don't have one. Lowfat does, however.

  19. #194 - SteveRo - I am Xtreme - Join Date: Nov 2008 - Location: Warrenton, VA - Posts: 3,029
    Mr. GullLars - many thanks for all the graphs - the visualization really helps
    Mr. Lowfat - where are you! Paul, I agree - the ioXtreme needs to be in this comparison.

    One more thought -
    Besides the adjustments of the controller card itself - read-ahead cache, write cache, NCQ, ...
    For me there appear to be three major variables that I need to combine and optimize -
    1. number of drives to attach (# of acards to attach to the 1231 in my case)
    2. stripe size - 4, 8, 16, 32, 64 or 128k (1231 choices)
    3. allocation (sometimes referred to as cluster) size - when you define your partition (Partition Magic and other tools let you change this - I have read).

    I have recently discovered that finding the sweet spot of the combination of these three can reap significant improvement -
    pcmv HDD test - 105638 - http://service.futuremark.com/compare?pcmv=327338

    Edit - I am still tweaking.
    Looking at the ORB - both Anvil and Mike have higher scores than this one!
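    If you want to be systematic about it, the search space is small enough to just enumerate (a sketch; the stripe sizes are the 1231ML choices listed above, while the drive counts and cluster sizes are illustrative guesses of mine):
    Code:

from itertools import product

drive_counts = [4, 6, 8, 10, 12]          # number of acards attached (illustrative)
stripe_kb    = [4, 8, 16, 32, 64, 128]    # 1231ML stripe-size choices from the post
cluster_kb   = [4, 8, 16, 32, 64]         # NTFS allocation unit sizes to try (illustrative)

combos = list(product(drive_counts, stripe_kb, cluster_kb))
print(len(combos), "combinations to benchmark")  # 150 -- a lot, but each is one short pcmv/IOmeter run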
    Last edited by SteveRo; 06-02-2010 at 02:26 AM.

  20. #195 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Great to be appreciated

    Here are some graphs showing the scaling of 1-8R0 C300 on 9260 FP.
    Linear QD stepping, low (1-8), medium (8-32), and high (32-128) QD ranges.
    [Attachment: low QD 1-8R0 C300 9260FP linear.png]

    This illustrates very clearly that there is little to be gained at low QD, but 2R0 has an advantage over a single drive. 8R0 is <10% over 2R0 for QD 1-8 and <5% over 3R0, in terms of IOPS.

    [Attachment: medium QD 1-8R0 C300 9260FP linear.png]

    In the medium QD range, 3R0 also makes sense, while adding more than 4 drives gives little return and seems to be controller limited. 8R0 is <12% over 3R0 and <5% over 4R0, in terms of IOPS.

    [Attachment: high QD 1-8R0 C300 9260FP linear.png]

    In the high QD range, the 4R0-8R0 curves mesh completely and are controller limited. 3R0 also hits the controller limitation from QD 56, and falls a bit below from QD 112 because it has fewer channels to spread IOs over, resulting in slightly higher access times.


    This leads me to speculate that a 9260-4i with FastPath and 4R0 C300 would be a really sweet spot, if you don't need over 1000MB/s (1.4GB/s read, 800MB/s write) bandwidth and 1TB capacity.
    If you can get by with 550MB/s write and 512GB capacity, 4R0 C300 128GB on a 9260-4i with FastPath may be an even sweeter spot cost/performance-wise.
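    One way to summarise where the controller limit kicks in is per-drive scaling efficiency (a sketch; the iops_by_drives values are placeholders to be filled in from the graphs above, not measured numbers):
    Code:

# Placeholder IOPS per drive count -- replace with the real QD 32/64/128 readings from the graphs.
iops_by_drives = {1: 50_000, 2: 95_000, 3: 135_000, 4: 160_000, 8: 165_000}

single = iops_by_drives[1]
for n, iops in sorted(iops_by_drives.items()):
    efficiency = iops / (n * single)   # how close nR0 gets to n times the single-drive IOPS
    print(f"{n}R0: {iops:>7} IOPS, scaling efficiency {efficiency:.0%}")
# A flat IOPS line from 4R0 to 8R0 while efficiency collapses is the signature of the
# controller limitation described above.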

    EDIT: BTW, what's going on at QD 6 and 18-24? All drive counts dip at those points...
    Last edited by GullLars; 06-02-2010 at 08:00 AM.

  21. #196 - mbreslin - Xtreme Enthusiast - Join Date: Feb 2010 - Posts: 701
    That's what I was afraid of. Nice to know I only spent 100% more than I had to on storage. We'll see what the 1880 can do I guess.

  22. #197 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    Hey, remember, this is only 4KB 100% random 100% read. For larger blocks, a more sequential pattern, or more write-heavy situations, your 8R0 array will have a greater lead over fewer drives.
    Care to run ATTO at max test file size at QD 2, 4, 8 and 10? That would give a quick look into sequential performance scaling by block size and QD.

  23. #198 - mbreslin - Xtreme Enthusiast - Join Date: Feb 2010 - Posts: 701
    · Set the stripe size to the maximum
    · Set the write policy to write back
    · Set the read policy to adaptive
    · Set the I/O policy to direct
    · Leave disk cache at "unchanged"

    These are LSI's suggested controller settings for 8 C300s, which I just received via email.

    1) I asked for them a week ago
    2) They're not remotely the best-performing settings for either sequential or random read/write

    I can run more tests; it will be an hour or two. Then 100% focus on pcmv!

  24. #199 - GullLars - Xtreme Enthusiast - Join Date: Dec 2009 - Location: Norway - Posts: 513
    I would think those would be good suggestions for RAID-5/6, but not for RAID-0.

  25. #200 - Anvil - Xtreme X.I.P. - Join Date: Apr 2008 - Location: Norway - Posts: 2,838
    Quote Originally Posted by SteveRo View Post
    ...
    For me there appear to be three major variables that I need to combine and optimize -
    1. number of drives to attach (# of acards to attach to the 1231 in my case)
    2. stripe size - 4, 8, 16, 32, 64 or 128k (1231 choices)
    3. allocation (sometimes referred to as cluster) size - when you define your partition (Partition Magic and other tools let you change this - I have read).

    I have recently discovered that finding the sweet spot of the combination of these three can reap significant improvement -
    pcmv HDD test - 105638 - http://service.futuremark.com/compare?pcmv=327338

    Edit - I am still tweaking.
    Looking at the ORB - both Anvil and Mike have higher scores than this one!
    My PB HDD test score while running the Full Suite is ~102-103K.
    I haven't really tried yet; no tweaking at all on my part.
    The HDD test suite score I posted (122590) was not while running the OS on the drives; the score drops a bit when the OS is on them.
    I'll try changing the cluster size, maybe this weekend.
    Still, I've got a feeling I need at least another SF drive in order to compete with mbreslin (on air, that is).

    @mbreslin,
    The advice from LSI looks like general advice, not settings tuned for running pcmv tests.
