Page 10 of 13
Results 226 to 250 of 317

Thread: Crappy RAID Performance?

  1. #226
    Xtreme Cruncher
    Join Date
    Jun 2005
    Location
    Lakeland,FL
    Posts
    2,536
    here is what I am getting using 16/16 on nvraid:

    750W Thermaltake Modular PSU
    DFI UT X58-T3eH8
    Core i7 920 @ 20 X 200 1.325V
    CORSAIR XMS3 DHX 4GB (2 x 2GB) DDR3 1600
    768 MB EVGA 8800GTX
    1 X 36GB WD Raptor
    2 X 150GB WD RAPTORS
    1 X SpinPoint P Series SP2504C 250GB
    1 X Maxtor 6L300S0 300GB
    16 X NEC DVD Burner
    7 120mm Yate Loon LED Intake Fan
    4 120MM Yate Loon Exhaust Fan
    28" HANNSPREE Monitor


    Watercooling Loop:

    1 X PA120.3
    1 X PA120.2
    2 X Laing DDC's w/EK-DDC Dual Turbo Top
    7 X Yate Loon Blue LED Intake Fans
    4 X Yate Loon Blue LED Exhaust Fans
    1 X Swiftech GTZ
    1 X GPU EK Fullcover Waterblock
    1 X XSPC Dual Bay Reservoir 5.25" with Bubble Window

  2. #227
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046


    MSI NF4 x16
    Areca 1210
    4 x 16MB 74 Raptor

    16KB/4KB, NCQ/TCQ enabled, write/read cache enabled

    so far I've seen avg. read up to 312MB/s



    so, is this good? lol
    [Attached: two HD Tach benchmark screenshots — untitled.JPG, untitled2.JPG]

  3. #228
    Xtreme Cruncher
    Join Date
    Jun 2005
    Location
    Lakeland,FL
    Posts
    2,536
    Quote Originally Posted by NapalmV5
    MSI NF4 x16
    Areca 1210
    4 x 16MB 74 Raptor

    16KB/4KB, NCQ/TCQ enabled, write/read cache enabled
    so far I've seen avg. read up to 312MB/s

    so, is this good? lol


    Looks great! Try with write caching enabled, but leave the rest disabled.

  4. #229
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by Grinch
    Looks great! Try with write caching enabled, but leave the rest disabled.
    TCQ, I just can't seem to find the disable button...

    This is what happens when I change the settings: I still have the capacity of 4 drives but the performance of 1... anyone? Weird.

    NCQ + read cache disabled = ~67MB/s
    NCQ enabled = ~84MB/s

    As for read cache, HD Tach is too inconsistent to really tell the difference; sometimes enabled is faster than disabled, with a difference of less than 2MB/s either way.

    Edit: I got TCQ disabled; difference of 1MB/s.
    Last edited by NapalmV5; 05-26-2006 at 05:46 PM.

  5. #230
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Nice STR with those Raptors; seek times are only marginally slower than a single drive, too.

  6. #231
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by mesyn191
    Nice STR with those Raptors; seek times are only marginally slower than a single drive, too.
    tnx!


    Looks like the Areca controller only runs at optimal performance if TCQ, NCQ, and read cache are enabled... same story with other Areca users?

    I don't know if nvraid supports it, or if it was mentioned before, but what I noticed is that when I used disk capacity truncation I gained ~6MB/s just from that.

  7. #232
    Xtreme Cruncher
    Join Date
    Jan 2004
    Location
    Philadelphia, PA
    Posts
    782
    Which driver are you using for the Areca? I only see an option to disable TCQ in Device Manager, NCQ only in the Areca BIOS, and I don't see the same cached read and write I did with the Nvidia RAID.


  8. #233
    Xtreme Enthusiast
    Join Date
    Jan 2004
    Posts
    756
    Those of you on nForce 4, have you tried the new 9.34 chipset drivers yet? I'm getting ready to integrate them into a new Windows XP 32-bit install and test them out on my new RAID array.

  9. #234
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Maine
    Posts
    938
    Quote Originally Posted by Hassan
    Which driver are you using for the Areca? I only see an option to disable TCQ in Device Manager, NCQ only in the Areca BIOS, and I don't see the same cached read and write I did with the Nvidia RAID.

    Are you using the HTTP GUI? It's in there under System Controls > System Config.
    The difference between genius and stupidity is that genius has limits.
    - Albert Einstein

  10. #235
    Xtreme Cruncher
    Join Date
    Jan 2004
    Location
    Philadelphia, PA
    Posts
    782
    Quote Originally Posted by Delirious
    Are you using the HTTP GUI? It's in there under System Controls > System Config.
    Yes, I can do it from there; I figured you guys were talking about doing it in Device Manager like the Nvidia RAID.


  11. #236
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Hassan, disable/enable TCQ in 'Modify Volume..'


    Quote Originally Posted by Grinch
    Seems to be about right for the 74-gigger... 16/16 is really better if you use a lot of smaller files in everyday (normal) use: surfing the web, writing emails, Word/Excel, etc. As the stripe and cluster get bigger you will lose some bytes doing smaller stuff, but it is marginal. If you are into heavy graphic rendering and unraring big files, then a larger stripe (32, 64, 128) along with matching cluster sizes will help do those tasks better. I have found that 16/16 shows better benchmarks, and personally I use 16/16... looks bada$$ in benchmarks... but if I start compiling and decoding large files again I will move up to 32/32 or 64/64. There is a hella easy way to change the default cluster size without any 3rd-party software:

    The EASIEST way to change your cluster size is to have a 3rd drive with Win XP on it. Here is what you need to do:

    1. Have a 3rd drive with Win XP.
    2. Go into the BIOS and change the boot order to the 3rd drive before the RAID.
    3. Once in Windows, go to "Disk Management" (as soon as you click on it, a window will pop up asking you to select the drive, and it will also ask if you want to convert the drive to a dynamic disk; I always choose no).
    4. You will see your RAID drives as 1 BIG drive. Now all you do is right-click on the drive and choose partition > primary partition > set size. This is where you can choose the cluster size; you will have 3 boxes to check off:
    5. NTFS or FAT32... Volume Label... Cluster Size...
    6. After you check all of that off, click Quick Format and voila, you have done it. Now do the same on the rest of the drives.
    7. Once you have set up all your partitions and selected cluster sizes, shut down the PC, unplug the 3rd drive, change the boot order so the CD-ROM is a bootable device before the RAID device, and you are good to go on a clean install. Remember, once you are in the setup window when installing XP, you have the choice to format the drives again and choose where XP gets installed; choose to leave as is. There is no need to format again, because it would default to 4K clusters again.

    Let me know how this goes for you... I have been doing this trick for a VERY LONG time and I know for a fact that this is a fast and easy way, without using any 3rd-party software.
    I did just that, and I still get the same thing I get with PartitionMagic: "disk error occurred". I just can't install WinXP on a 16K cluster. What am I doing wrong?

    BTW, anyone know the setting for allocation size to include in winnt.sif?
    Last edited by NapalmV5; 05-27-2006 at 04:19 PM.
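The stripe/cluster tradeoff Grinch describes above boils down to how many array members a given file actually spans. Here is a minimal sketch of that arithmetic (hypothetical file sizes, just to illustrate the round-robin layout, not any particular controller's behavior):

```python
# Rough model of the stripe-size tradeoff: a contiguous file is split
# into stripe-sized chunks laid across the array members round-robin.
# A small stripe spreads even modest files over every drive; a large
# stripe keeps small files on one drive while big files still hit all.

def drives_touched(file_bytes: int, stripe_bytes: int, n_drives: int) -> int:
    """How many array members a contiguous file of this size lands on."""
    stripes = -(-file_bytes // stripe_bytes)  # ceiling division
    return min(stripes, n_drives)

# A 32KB file on a 2-drive array: spans both drives with a 16K stripe,
# only one drive with a 64K stripe.
print(drives_touched(32 * 1024, 16 * 1024, 2))  # 2
print(drives_touched(32 * 1024, 64 * 1024, 2))  # 1
```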

  12. #237
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Quote Originally Posted by NapalmV5
    tnx!


    Looks like the Areca controller only runs at optimal performance if TCQ, NCQ, and read cache are enabled... same story with other Areca users?

    I don't know if nvraid supports it, or if it was mentioned before, but what I noticed is that when I used disk capacity truncation I gained ~6MB/s just from that.
    I disable NCQ and TCQ, read cache is enabled though.

    Make sure to see if your card needs to be updated too. Areca.com.tw has updated versions of the firmware, BIOS, windows drivers, and HTTPGUI IIRC.

  13. #238
    Xtreme Guru
    Join Date
    Jan 2006
    Location
    Milan, Italy
    Posts
    4,177
    Very interesting thread. It *almost* makes me want to go through all the PITA of having a failure-prone RAID-0... but not quite
    Quote Originally Posted by Bloody_Sorcerer
    flowrate is for losers!
    Quote Originally Posted by MaxxxRacer
    Thermaltake is kind of like AIDS; it won't go away just by ignoring it.

  14. #239
    I am Xtreme
    Join Date
    Oct 2004
    Location
    U.S.A.
    Posts
    4,743
    Quote Originally Posted by creidiki
    Very interesting thread. It *almost* makes me want to go through all the PITA of having a failure-prone RAID-0... but not quite
    I'm using RAID 5. Here's 4x500GB on my Areca 1230 card with 1GB cache:
    [Attached: HDtach4x500GBseagate7200.9_raid5_arecaSIIpciestripe128k_1230_1GBcache.jpg]


    Asus Z9PE-D8 WS with 64GB of registered ECC ram.|Dell 30" LCD 3008wfp:7970 video card

    LSI series raid controller
    SSDs: Crucial C300 256GB
    Standard drives: Seagate ST32000641AS & WD 1TB black
    OSes: Linux and Windows x64

  15. #240
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Richmond, VA
    Posts
    471
    Quote Originally Posted by NapalmV5
    I did just that, and I still get the same thing I get with PartitionMagic: "disk error occurred". I just can't install WinXP on a 16K cluster. What am I doing wrong?

    BTW, anyone know the setting for allocation size to include in winnt.sif?
    from this thread:

    re: cluster size with XPSP2

    It's a glitch with SP2; it only likes a cluster size of 4K (the default).
    It should work fine with XP SP1, installing SP2 after.
    The x64 Edition trial will let you set up whatever stripe/cluster combo you wish.

  16. #241
    Xtreme Cruncher
    Join Date
    Jun 2005
    Location
    Lakeland,FL
    Posts
    2,536
    Quote Originally Posted by creidiki
    Very interesting thread. It *almost* makes me want to go through all the PITA of having a failure-prone RAID-0... but not quite

    Failure-prone? I have been running RAID 0 for over 7 years and have yet to have a failure... besides, I have a 3rd drive for all my important stuff that is not RAIDed...

  17. #242
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Richmond, VA
    Posts
    471
    Quote Originally Posted by Grinch
    Failure-prone? I have been running RAID 0 for over 7 years and have yet to have a failure... besides, I have a 3rd drive for all my important stuff that is not RAIDed...
    Hey Grinch, do you think 16/16 will be optimal for 2 x Western Digital SE16 250GB 7200 RPM SATA 3.0Gb in RAID 0? Mainly I'll be gaming, web browsing, and ripping DVDs on the machine.

    I've never used RAID, but I'm going to give it a try w/ my new build thanks to all the great info in this thread.

  18. #243
    Xtreme Guru
    Join Date
    Jan 2006
    Location
    Milan, Italy
    Posts
    4,177
    Statistically, every drive you add to a RAID-0 array decreases your array's MTBF, because there's no redundancy.

    Unfortunately, HDs die on me with clockwork precision; if I keep them for more than 2 years they croak. It must have something to do with the fact that my PC is usually running 24/7, 10-11 months of the year. On a 2-disc array that gives me a 1-year MTBF, which is simply unacceptable =p

    Not that it matters, really. I don't give a toss how fast my BF2 levels load... not that I play FPS games anyway.

    I really don't see the point in RAIDing my games drive. Yey, it only takes x seconds to boot game x; worth the effort? Not for me.
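For what it's worth, the 2-year-drives / 1-year-array arithmetic above is the standard no-redundancy result: assuming independent drives with exponentially distributed lifetimes, an n-drive RAID-0 fails as soon as any member fails, so the array's MTBF is the single-drive MTBF divided by n. A quick sanity check (illustrative numbers only):

```python
# RAID-0 has no redundancy: the array dies when ANY member dies.
# For n independent drives with exponential lifetimes, the time to
# first failure is also exponential, with MTBF = drive MTBF / n.

def raid0_mtbf(drive_mtbf_years: float, n_drives: int) -> float:
    """Expected time to first failure across n independent drives."""
    return drive_mtbf_years / n_drives

print(raid0_mtbf(2.0, 2))  # 1.0 -> 2-year drives in a 2-disc stripe: 1 year
```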

  19. #244
    Xtreme Cruncher
    Join Date
    Jan 2004
    Location
    Philadelphia, PA
    Posts
    782
    Quote Originally Posted by safan80
    I'm using RAID 5. Here's 4x500GB on my Areca 1230 card with 1GB cache
    That seems low; I pull around 160-170 MB/s from 4 x 160GB WD RE drives in RAID 5 on an Areca 1210 with TCQ + NCQ + write-back cache.


  20. #245
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Quote Originally Posted by safan80
    I'm using RAID 5. Here's 4x500GB on my Areca 1230 card with 1GB cache
    He is using RAID 5; it's common for speeds to be that low or even lower. RAID 6 might be better performance-wise for you; I think you can do it with 4 drives.

    How does your system "feel" now with the 1GB of cache, or do you still not have that yet?
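The RAID-5 numbers in this exchange can be sanity-checked with the usual back-of-envelope: large sequential reads stripe across all members, but one stripe unit per full stripe is parity, so usable read bandwidth is roughly (n - 1) times the per-drive STR. The per-drive figure below is an assumed placeholder, not a measurement from this thread:

```python
# Back-of-envelope RAID-5 sequential read: data is striped over all n
# members, but each full stripe carries one parity unit, so only n - 1
# units per stripe are payload.

def raid5_seq_read_mb_s(per_drive_mb_s: float, n_drives: int) -> float:
    """Rough upper bound on large sequential reads from an n-drive RAID-5."""
    return per_drive_mb_s * (n_drives - 1)

# Assuming ~60 MB/s per drive (hypothetical), 4 drives land near the
# 160-170 MB/s range reported above.
print(raid5_seq_read_mb_s(60.0, 4))  # 180.0
```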

  21. #246
    Xtreme Cruncher
    Join Date
    Jan 2004
    Location
    Philadelphia, PA
    Posts
    782
    Quote Originally Posted by Hassan
    4 x 160G WD RE Drives Raid5
    I'm using RAID 5 also.


  22. #247
    Xtreme Cruncher
    Join Date
    Jun 2005
    Location
    Lakeland,FL
    Posts
    2,536
    Quote Originally Posted by Kilyin
    Hey Grinch, do you think 16/16 will be optimal for 2 x Western Digital SE16 250GB 7200 RPM SATA 3.0Gb in RAID 0? Mainly I'll be gaming, web browsing, and ripping DVDs on the machine.

    I've never used RAID, but I'm going to give it a try w/ my new build thanks to all the great info in this thread.

    That would be great!

  23. #248
    Xtreme Cruncher
    Join Date
    Jun 2005
    Location
    Lakeland,FL
    Posts
    2,536
    Quote Originally Posted by creidiki
    Statistically, every drive you add to a RAID-0 array decreases your array's MTBF, because there's no redundancy.

    Unfortunately, HDs die on me with clockwork precision; if I keep them for more than 2 years they croak. It must have something to do with the fact that my PC is usually running 24/7, 10-11 months of the year. On a 2-disc array that gives me a 1-year MTBF, which is simply unacceptable =p

    Not that it matters, really. I don't give a toss how fast my BF2 levels load... not that I play FPS games anyway.

    I really don't see the point in RAIDing my games drive. Yey, it only takes x seconds to boot game x; worth the effort? Not for me.

    Sorry to hear you have had issues with RAID. I don't run my machine 24/7, which is probably one reason why I have not had a problem. But if you have problems after 2 years, I would recommend WD Raptors, which have a 5-year warranty.

  24. #249
    Xtreme Guru
    Join Date
    Jan 2006
    Location
    Milan, Italy
    Posts
    4,177
    I've got Rappies - much too loud; swapping them for Samsungs soon. Nice performance but not really worth it IMO. Again, I don't really care about how fast levels in games load, and all the rest of my stuff is just stored apps/comics/videos/music.

    And yes, that's probably why. Keeping them running 24/7 really ages consumer drives fast.

  25. #250
    Xtreme Enthusiast
    Join Date
    Jan 2004
    Posts
    756
    Quote Originally Posted by creidiki
    Statistically, every drive you add to a RAID-0 array decreases your array's MTBF, because there's no redundancy.

    Unfortunately, HDs die on me with clockwork precision; if I keep them for more than 2 years they croak. It must have something to do with the fact that my PC is usually running 24/7, 10-11 months of the year. On a 2-disc array that gives me a 1-year MTBF, which is simply unacceptable =p
    It depends on how you want to look at it. When you add more drives to an array, you add an extra point of failure, but the drives in an array are doing less work than a single drive. Effectively, two drives in an array would each be doing ~50% of the normal workload, whereas three in an array would each be doing ~33%. Of course those percentages are not perfectly accurate statistics, but the fact remains that an individual drive is reading and writing less data when it's in an array.
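Both halves of that tradeoff can be put side by side. Assuming independent drive failures with some per-drive failure probability p over a fixed window (a simplification; real failures correlate), striping cuts each drive's share of the I/O but raises the chance that the array as a whole loses a member:

```python
# Two competing effects of adding drives to a stripe set:
#  - each member handles roughly 1/n of the total I/O;
#  - the array fails if ANY member fails, so the combined failure
#    probability over a window grows with n.

def per_drive_share(n_drives: int) -> float:
    """Approximate fraction of total I/O each member handles."""
    return 1.0 / n_drives

def array_failure_prob(p_drive: float, n_drives: int) -> float:
    """P(at least one of n independent drives fails in the window)."""
    return 1.0 - (1.0 - p_drive) ** n_drives

print(per_drive_share(2))           # 0.5 -> ~50% of the workload each
print(array_failure_prob(0.05, 2))  # ~0.0975 vs 0.05 for a single drive
```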

    Quote Originally Posted by creidiki
    I've got Rappies - much too loud; swapping them for Samsungs soon. Nice performance but not really worth it IMO. Again, I don't really care about how fast levels in games load, and all the rest of my stuff is just stored apps/comics/videos/music.

    And yes, that's probably why. Keeping them running 24/7 really ages consumer drives fast.
    You should check into the Western Digital 2500KS 16MB cache drives. They aren't very loud and will offer better performance than those Samsungs.

