Would you recommend using the same Device Manager settings for a 300 GB SATA WD with NCQ? I gained some performance going back to the standard SATA driver and unchecking those boxes.Quote:
Originally Posted by Grinch
safan no selling outside of classies.. please go read the rules and remove your post immediately.
Hassan, hardware RAID really is the only way to go if you can afford it. The 128MB of onboard memory you can get on them really helps out too.. great for burst performance.
Actually the Areca cards that support 1GB of onboard RAM are better.Quote:
Originally Posted by MaxxxRacer
1GB is a bit excessive, don't you think? I mean if you have the money, why not, but otherwise it's a bit much.
I'll try to find the review of the Areca cards with different levels of onboard RAM. It's very much worth it.Quote:
Originally Posted by MaxxxRacer
Edit: found it, take a look at the whole review
http://tweakers.net/reviews/557/1
I'm pretty sure even the $500 Areca controller with 8 SATA2 ports still only has 128MB; all of their controllers have the ability to expand to 1GB.
http://www.areca.us/products/html/pcie-sata.htm
Unless you need 8 ports you can get the 4-port model for $330.00 at Newegg.
Still a lot, but it's worth it imho.
As well as having memory it also has its own Intel IOP332 I/O processor, so it basically takes the whole load off of your system. If you're going to run more than 2 drives in RAID on the nvidia controller you're better off getting a controller card, because after 2 drives CPU utilization goes way up.
Edit: it's only the 12-24 port models that have the expandability option, and for a home user spending $1000.00 on a RAID controller is nuts when you really won't tap its full potential unless you're supporting multiple users.
Edit: just read all 32 pages of that review, very in-depth and a good read.
Quote:
Originally Posted by Trice
yes ...:woot:
OK, I couldn't take the temptation anymore, I broke down and bought the 4-port Areca 1210 controller and two more Seagates for a 4x RAID 0 array.
Only reason I did is the 6 months no payments from the egg. And I'll be selling the two 36GB Raptors I have.
Benchies will be posted in 2 days!!!
Hahah, awesome.
You'll love that card, but watch out, booting it on the DFI UltraD can be a PITA.
Yeah, that's what I'm thinking; I already have both slots running at 8x with the video card in the bottom slot, so I'm thinking I already have half the work done to set it up.Quote:
Originally Posted by mesyn191
I'm still waiting for my payment to post to my account.. I have two controller cards, a 3ware 8506-8 with 6x250GB in a RAID 5 array and the Areca 1210 with 4x500GB in RAID 5. This is in my file server. I knew when I set it up I would have to make a controller card upgrade to add more space, and Newegg has the 12-port Areca PCI-E for only $729.00 plus $100 for a 1GB DIMM. The card used to be about $800. I figure if AM2 really is crap as it's looking to be I'd buy more 500GB drives at $280 each. The cool thing is Newegg is finally carrying the 16x PCI-E card but it's $929 :(Quote:
Originally Posted by Delirious
From what I have read and heard, a PCI-based card is limited to the PCI bus and onboard native SATA/RAID is faster...that was about the time the KT400 motherboards started coming out..
Read the link posted above. PCI-X and PCI-Express controllers are both faster than standard PCI and onboard controllers.Quote:
Originally Posted by Grinch
Quote:
Originally Posted by safan80
Ok cool, didn't see that..:woot: but what a HUGE price....unless you are running a server...just don't see the need...but that is just me...besides, what if you want to run SLI vid cards? :toast: :woot:
Highpoint makes 4x PCI-Express cards, and the A8N32-SLI has one of these slots and so do some DFI boards.Quote:
Originally Posted by Grinch
http://www.newegg.com/Product/Produc...ice=&maxPrice=
the price is $142-$260
That Areca controller seems to work like a ramdrive. I'd buy a 4-port version but I have no place to put it on my motherboard.
Been reading through the whole thread twice as it's so interesting, then downloaded HD Tach to test my drive, a Hitachi 250GB SATA 2.
Here's the problem: I am getting massive changes in readings and I have not changed anything at all. Burst speed shows anywhere from 90 MB/s to 220 MB/s, and the average read from 53 MB/s to over 90 MB/s.
I am not using RAID, this is with just one hard drive. I do have two and I was going to RAID them and see how much extra they give, but with such massive differences I'm not sure I would know if it were better or not.
Please can anyone help explain what could be happening? I am changing nothing, just running the tests, and every time the results are way different. Could the hard drive be failing?
Thanks.
Don't have anything running in the background that could be scanning the hard drive or using it when you run the program.Quote:
Originally Posted by kimandsally
Just tried it and it's still the same. There are massive changes in the graph, like it goes down from 70MB/s to less than 5MB/s in between 80GB and 120GB, as well as being totally different each run. I just can't get anything close to a repeatable score. Do hard drives do this before they die?
PS I did a clone of my system using Acronis then reinstalled XP so it was totally clean and it still does it??????
Thanks for the ideas please keep them coming.
Run a defrag, type this into Run: rundll32.exe advapi32.dll,ProcessIdleTasks and watch your processes until they are down to 0% CPU (usually the hourglass will disappear too), and close any other running programs; you'll get more consistent results.
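If you'd rather script that than stare at Task Manager, here's a rough Python sketch of the same idea -- flush the idle tasks, wait for the CPU to settle, then launch your benchmark. The psutil package and the HD Tach path are my assumptions, adjust to whatever you actually run:
Code:
import subprocess
import time

import psutil  # third-party package, assumed installed (pip install psutil)

# Same as typing it into Start > Run: flush Windows' queued idle tasks first.
subprocess.run(["rundll32.exe", "advapi32.dll,ProcessIdleTasks"], check=True)

# Wait until overall CPU usage settles near zero so nothing is competing
# with the benchmark for the disk or the CPU.
while psutil.cpu_percent(interval=1.0) > 2.0:
    time.sleep(1.0)

# Placeholder path -- point this at HD Tach / ATTO / whatever you use.
subprocess.run([r"C:\Program Files\HDTach\hdtach.exe"])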
I would also try and reinstall the nforce4 chipset drivers...and use the tweaks above.
He is probably looking at the burst speed and stuff, which is effectively useless as it can only do that from the cache on the hard drive, which at the most is only 16MB.
You want to look at the sustained read/write speeds and seek times.
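If you want a quick sanity check that doesn't depend on any one benchmark's numbers, you can time a sustained read yourself. A minimal Python sketch, assuming a large test file already exists on the drive in question (the path is just an example, and the OS file cache will inflate the result unless the file is much bigger than your RAM):
Code:
import time

CHUNK = 1024 * 1024              # read in 1 MiB pieces
TEST_FILE = r"D:\testfile.bin"   # any multi-GB file on the drive under test

read_bytes = 0
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        read_bytes += len(data)
elapsed = time.perf_counter() - start

# Sustained read estimate; burst/cache effects mostly average out over a big file.
print("sustained read: %.1f MB/s" % (read_bytes / elapsed / 1e6))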
Hi there still crap performance,
http://img177.imageshack.us/img177/5287/hdtach5xo.jpg
Any idea whats happening?
I wouldn't put much stock in HD Tach, ATTO is a much better program. I can use HD Tach and run multiple tests with nothing in the background and the test will still fluctuate by +/-10 points on the average read.
I tried to open up your picture kimandsally but I get nothing?
What are all those open programs you have running? That will definitely kill the test. And it looks like you still have a few background processes running.
It shouldn't be doing that...Quote:
Originally Posted by kimandsally
something is either wrong with the hard drive, your drivers, the cable, or your windows installation.
Wow. Try to close all the programs you have and run the test again. It might not be much better though. Your random access time is way up at 22ms, compared to a typical 10K RPM drive at 8ms and 15K RPM drives at 5ms.Quote:
Originally Posted by kimandsally
If closing all open programs doesn't work then I would just go for a clean install of Windows and install the proper drivers correctly.
And definitely turn off AVG and ALL the other background stuff you have running.
That was my first thought, either an anti-virus scan or defragmenter running in the background, or something else accessing the drive.Quote:
Originally Posted by Delirious
Just for fun, here's the new 74GB Raptor (16MB cache) vs the old one -
http://img253.imageshack.us/img253/9...ache4mk.th.gif
Ever considered running a S.M.A.R.T. scan on that drive or using Hitachi's diag tools http://www.hitachigst.com/hdd/support/download.htm
That's pretty impressive for a single drive.Quote:
Originally Posted by IvanAndreevich
Hi, BIG thanks. I just did a reinstall of Windows and turned everything off and ran it twice, and now the results are MUCH better; here's both of them:
http://img80.imageshack.us/img80/815...tach0sf.th.jpg
Thanks for helping, this was a little awkward and I've learnt something here. I think I'll try different RAID setups at the weekend.
That's much better :)Quote:
Originally Posted by kimandsally
Quote:
Originally Posted by kimandsally
That's good, looks like it's running like it should.
You're a great bunch on here, nice to have help from people in the know; some forums just aren't as helpful as this. Thanks to you all :-)))
Good to see you got your problem resolved, now if I could only get mine worked out....
I just did a quick read through of the benches on that areca card... all I can say is holy crap.. Gimme gimme!
Yep, 128MB cache vs 1GB cache, there's a big difference :) The good thing about buying a RAID controller with lots of ports and PCI-Express is that it'll last longer than any other computer part you will buy. I call it investing in the future.Quote:
Originally Posted by MaxxxRacer
Good tip on disabling read caching and command queuing. Results before and after.
http://img163.imageshack.us/img163/7523/hdraid1xe.png
http://img227.imageshack.us/img227/2650/hdraid13dd.png
Quote:
Originally Posted by kimandsally
What stripe and cluster size are you using? If I had to guess I would say the stripe size is over 32K.
edit:
looks like you got it in order...
Got my hard drives back from Hitachi. Seems better, but latency sure increased.
Is everyone that is using DFI boards using the latest BIOS for their board, the one with the updated nvraid ROM? If so, have you noticed a difference?
The 1210 comes tomorrow! But UPS is delivering it, so that means a 90% chance it won't come till late afternoon :mad: I might have to go out driving looking for the united parcel smasher man. :DQuote:
Originally Posted by MaxxxRacer
OK, the card came yesterday, and apparently according to Areca's website it works fine for them in DFI boards. Well, it doesn't; it only works in the bottom 16x slot when it's in 1x mode. Performance with the same drives and same config on the Areca is worse than the onboard.
Otherwise this is a very nice card with a very nice feature set; I'm sure if it worked in 8x then it would scream. The manual is 160 pages and well written, and it comes with a CD that you can boot from to create RAID driver floppies.
Newegg doesn't give refunds for this item either, somehow I missed that.
Bummer dude. I have missed that notice a couple of times myself. I was liking this card until your post :(Quote:
Originally Posted by Delirious
Quote:
Originally Posted by thinkingbear
Don't get me wrong, it's an awesome card; it seems to be an issue with most dual 16x slot motherboards, basically the slots only accept video data unless they are run at something like 1x or 2x.
There is a 4x slot on the DFI, it would work, I'd just have to cut the end out of it for the card to physically fit; then there's the matter of the floppy power header being in the way, figures.:rolleyes:
Cut it dude, the Promise card that uses the same Intel proc is a 4x; how is the floppy conx in the way? Heatsink?
Like mine, lolQuote:
Originally Posted by Delirious
lol again, I don't think any mobo manufacturers planned on anyone actually trying to use those slots. I have a PowerColor PCI-E card that I had to bend 25° to make it fit around the heat pipe on my A8N32, and it completely blocks the release on the top graphics card.Quote:
just would have to cut the end out of it for the card to physically fit, then theres a matter of the floppy power header being in the way, figures.:rolleyes:
From what I have read though, there are several people who run their RAID cards in the 16x slot with various motherboards; I think I'm going to perform the SLI mod on my Ultra-D as people with the SLI-DR don't seem to have this problem.Quote:
Originally Posted by thinkingbear
To top it all off I woke up this morning to an empty 200GB backup drive that used to be full of backups. Somehow the data just walked off; it's like someone came in while I was asleep and deleted everything on it. The partition is still there, just no backups. I feel sick just thinking about all the stuff I just lost.
2x Hitachi 80 Gig SATAII on Epox 9NPA+ Sli, Opteron 146- 2750 MHz
http://members.cox.net/mucker/Hitachi.jpg
74 Gig Raptor on DFI nF4 Ultra-D, Opteron 146- 2800 MHz
http://members.cox.net/mucker/Raptor.jpg
Here is one test with raptors in raid versus a single raptor (at the end)
http://forums.storagereview.net/inde...4&#entry221874
GamePC's review is much better; they not only test the 74GB and 150GB as single Raptors and in 4x RAID, they use an Areca 1220 PCI-Express card.Quote:
Originally Posted by Esso
http://www.gamepc.com/labs/view_cont...150raid&page=1
I'm almost ready to drop $600 on one of those controllers and grab 5 drives. Dunno if I would go with Raptors though. I just don't need that much space on my C drive is all. But the performance is incredible :toast:
Quote:
Originally Posted by Haltech
I just ordered an Areca 1230 and once I get it and everything set up I'm selling my 1210. If I were you I would go with 6 drives and run RAID 50 if you don't need the space; RAID 50 is two RAID 5 arrays striped together, with the performance of RAID 0 but with redundancy, so you won't lose your data.
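For anyone weighing those layouts, the space/redundancy trade-off is just arithmetic. A quick Python sketch of it (simplified: equal-size drives, and RAID 50 assumed to be two equal RAID 5 legs striped together):
Code:
def usable_gb(level, drives, size_gb):
    """Rough usable capacity for an array of equal-size drives."""
    if level == "raid0":
        return drives * size_gb                  # no redundancy at all
    if level == "raid5":
        return (drives - 1) * size_gb            # one drive's worth of parity
    if level == "raid50":
        legs = 2                                 # two RAID 5 sets striped together
        per_leg = drives // legs
        return legs * (per_leg - 1) * size_gb    # one parity drive per leg
    raise ValueError(level)

for level in ("raid0", "raid5", "raid50"):
    print(level, usable_gb(level, 6, 500), "GB usable from 6 x 500GB")

With six 500GB drives that works out to 3000/2500/2000 GB usable: RAID 50 gives up one more drive's worth of space than one big RAID 5, but it can survive one failed drive in each leg.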
OK, so if you have an Ultra-D and you want to use the top 16x slot for the RAID controller, YOU HAVE TO PERFORM THE SLI MOD. Took me 15 min to do it, put everything back together and now it works perfectly. Needless to say I have a big grin on my face right now :D
I'm trying to figure out the best stripe and cluster size for 4x160GB drives in RAID 0; 16/4 gives crappy performance for what it should be.
Any suggestions?
Do the next step up which is 32/4. Should be better..
64KB stripe is perfect unless you're storing lots of large files (in which case go for a larger cluster size) or lots of small files (in which case go for a smaller cluster size).
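One way to picture that trade-off is to count how many member disks a single read touches at a given stripe size: small stripes spread even modest files across every disk (great for one big transfer, more seeking when lots of requests land at once), while big stripes keep small and medium files on one disk. A toy Python sketch with made-up file sizes, just to show the effect:
Code:
import math

def disks_touched(file_kb, stripe_kb, drives):
    """How many member disks one sequential read of this file hits in RAID 0."""
    stripes = math.ceil(file_kb / stripe_kb)
    return min(stripes, drives)

for stripe_kb in (16, 64, 128):
    for file_kb in (4, 100, 8192):      # tiny file, medium file, big game archive
        n = disks_touched(file_kb, stripe_kb, 4)
        print(f"{stripe_kb:>3}K stripe, {file_kb:>5}K file -> {n} disk(s)")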
Glad to hear you got your issues worked out.
I had an UltraD that I couldn't mod so I switched to a Gigabyte mobo in order to run my Areca 1210, not anywhere near as many OC'ing options as the DFI board but it just plain works and was cheap so I'm happy. Getting the itch to upgrade again with all you guys talking about buying the 1230 and looking at those benches with the 1GB version....
You mean a 32k stripe and a 4k cluster right?Quote:
Originally Posted by Mike6969
I would try a 16k stripe and 4k cluster. It looks a little lower than it should be; with 2 of those Raptors you should be around 100MB/s average read.
Change to 16/4 and all will look better :woot:
How can you install Windows XP on a partition with a cluster size of 16K?
I tried formatting the partition with XP (C:\WINDOWS\system32\diskmgmt.msc) and with Partition Magic (both in XP and on a bootable floppy).
But either the Windows install says that the partition isn't a good NTFS partition (you have to reformat), or I get "NTLDR missing"/"Disk write/read error" (this is when I installed Windows XP on an IDE hard drive, formatted the S-ATA hard drive to 16K, installed Windows on the S-ATA hard drive and disconnected the IDE).
Any ideas? :)
Is there a Windows program to determine the stripe size?
Stripe size is determined when you create the RAID array, you choose it just before you make one.
NTLDR is on the IDE. If the IDE is Disk 0 and the SATA is Disk 1, when you install and then remove the IDE, NTLDR and boot.ini were copied to the IDE as it is the first boot drive. Try installing to the SATA with the IDE removed. Should be OK :)Quote:
Originally Posted by Vassili
re: cluster size with XPSP2
It's a glitch with SP2; it only likes a cluster size of 4K (the default).
It should work fine with XP/SP1 and then install SP2 after.
The x64 Edition trial will let you set up whatever Stripe/cluster combo you wish.
Snap! That's what I've been waiting for - 74GB Raptor 16MB.. yay! 300MB+/s shall come to pass.. for me at leastQuote:
Originally Posted by IvanAndreevich
Ivan, where did you get yours? Don't tell me it's an ES.. LOL :) Can't find any online..
I forgot that, thanks!Quote:
Originally Posted by Hassan
Windows XP -non SP1/2 did the trick, thanks! You made my day.:)Quote:
Originally Posted by soundx98
Actually, I think it's different for different revisions of the board. I had no probs running my Promise Supertrack 8350 in either PCI-E slot on my previous Ultra-D and when I installed a Rev. AD0 board, I lost functionality in the 16x slots for the card. Not a big deal as the Promise is only a 4x card so I just moved it up to the top slot.....Quote:
Originally Posted by Delirious
Hey, glad to see all worked out! :toast:
Quote:
Originally Posted by mesyn191
Backwards: a 64K stripe is perfect for larger files and 16K is perfect for average day-to-day stuff.. with 64K you get more wasted bytes.. :toast:
I have a situation.
I recently revived my SCSI setup, which has been lying on a dusty shelf since I sold the parts of my old rig.
The SCSI setup consists of an LSI Logic LSI213200-R Ultra320 SCSI adapter (PCI-X) and two 36.7GB, 16MB-cache, 15K RPM Hitachi Ultrastar HDs in RAID.
Because my mobo doesn't have a PCI-X slot, I installed the adapter on the PCI bus (the adapter is backwards compatible), so I know it's limited from the start.
Here's the HD tach scores of two setups (scsi vs current raptor setup).
http://img212.imageshack.us/img212/8...ewindow0jp.jpg
As you can see, the SCSI setup has a very low RA time and CPU utilization, while the quad-Raptor setup is superior on average read.
So what are my options here?
1 - Keep Windows and games on the SCSI setup and use the Raptors as very expensive backup/storage.
2 - Sell the Raptors and get two, four or, what the heck, six more of those fast little Hitachi drives ($187 a piece) and enjoy SCSI action.
3 - Get rid of the SCSI setup altogether since, due to the PCI bus limitation, there is no way I can get the same performance my current setup gives even if I add more disks to the chain.
4 - Other.
I would say option #2 if you had a PCI-X slot, but since you don't, option #3.
How about an option 4: get a dual-CPU motherboard with PCI-X and two 16x PCI-Express slots. You could run dual dual-cores.
http://search.ncix.com/displayproduc...tern%20DigitalQuote:
Originally Posted by NapalmV5
what board has 2 pci-e 16x and pci-x?Quote:
Originally Posted by safan80
http://tyan.com/products/html/thunderk8we.html ;)Quote:
Originally Posted by Hassan
I spaced for a sec, I read the post and saw 940, but I asked because I'm looking for a 939 server board with PCI-E and PCI-X and I didn't see a Supermicro or Tyan board that had both.
s939 server board? :wth:Quote:
Originally Posted by Hassan
Well, I have a few extra Opty 170 939s and I was going to get the Tyan 2865G2NR http://www.tyan.com/products/html/tomcatk8e.html for a customer. It was between that or the Supermicro H8SSL-R10 http://www.supermicro.com/Aplus/moth.../H8SSL-R10.cfm. One has PCI-E, the other has PCI-X.
Quote:
Originally Posted by LexDiamonds
I think it's far from the truth that that plainly helps. It's very dependent on a lot of factors and not a good "rule of thumb".
Obviously, there is no "optimal stripe size" for everyone; it depends on your performance needs, the types of applications you run, and in fact, even the characteristics of your drives to some extent. (That's why controller manufacturers reserve it as a user-definable value!) There are many "rules of thumb" that are thrown around to tell people how they should choose stripe size, but unfortunately they are all, at best, oversimplified. For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance.
http://www.storagereview.com/guide20...erfStripe.html
I find storagereview's guides and educational material very helpful.
Finding a good stripe/cluster size for your setup is the hardest part. I tried pretty much every combination and they all tested the same give or take a little, so I just gave up and use a 64k stripe and 4k cluster, which I think StorageReview recommended for everyday usage in a desktop.
I think stripe/cluster choices come into play more in a server than on our desktops.
Quote:
Originally Posted by epion2985
Yeah, that would be great, but buying that mobo means shelling out 500 bucks plus the 940 CPU plus ECC RAM. With AM2 and Conroe coming out, I'm not sure it's the smartest idea to invest big money in a system like that.Quote:
Originally Posted by s7e9h3n
here are some pics of 16/16,32/32,64/64...in that order:
http://s2.supload.com/thumbs/default/16_16.bmp
http://s2.supload.com/thumbs/default/32_32.bmp
http://s2.supload.com/thumbs/default/64_64.bmp
Ummm, AM2 or Conroe? ECC DDR2 is not that prevalent yet, and AM2 and Conroe aren't multiprocessor, maybe multi-core, but you can't even compare those platforms with what s7e9h3n recommended. Moreover, I'm not going to tell a paying client to wait 3 months for something else. But hey, that's me.Quote:
Originally Posted by SamHughe
Anyway, my Areca 1210 should be arriving today with the battery backup; I'll post before and after with 4x 36GB Raptors in RAID 0, nvidia vs Areca.
I get this with a DFI CFX3200 and 2x Raptor 150GB in RAID 0 (16k/4k) controlled by the ULi M1575:
http://koti.mbnet.fi/mtrepo/HDTach/H...ULIECached.jpg
That was with an X2 4400+ @ 10 x 280MHz. The HDD performance seems to increase consistently with decreasing CPU/HT speed. With a 10x180 underclock I get this:
http://koti.mbnet.fi/mtrepo/HDTach/H...underclock.jpg
PCMark05 tells the same story. 7.4k @ 10X310, 8.1k @ 10X220.
:confused: What is going on here? Help appreciated.
Well Hassan, if you can beat this HD Tach I'll be surprised. I took this when I installed my 1230 with only 256MB of cache (the 1GB is on the way). This is with 4x500GB Seagates using RAID 5. Now I can finally get rid of the 1210. The volume name is the same because I originally made the array on my 1210.
I'll see, just waiting for the UPS man right now!!
Quote:
Originally Posted by Hassan
Looking forward to that BIG time!....:woot: :clap: :banana:
Quote:
Originally Posted by Matuzki
That's definitely strange...I would try and contact: me2@georgebreese.com
George helped me out years ago when the KT400 chipset came out...he designed various RAID patches that fixed a lot of glitches...
http://www.georgebreese.com/net/software/
This is where I used to go to download tests and whatnot, but he is the man when it comes to RAID! :woot:
I second that. I'm also waiting for my HighPoint 2310 (PCI-E x4) SATA controller card to arrive. I'll post my results with 4x 150GB Raptors on this card (if everything works OK).Quote:
Originally Posted by Grinch
What stripe sizes and cluster sizes are you guys using with your Windows XP arrays? I've been digging up some information because I bought another WD2500KS to set up in a RAID 0 array.
This is what I've found so far:
http://discuss.futuremark.com/forum/...&fpart=13&vc=1Quote:
I would like to update this RAID0 guide with my experiences, I have used RAID for a few years now, and have no problems at all.
Upon installing WinXP you will find the average file size to be 373kb, the general rule is correct, you divide the file size by two, and go to the next lowest setting:
373/2=186.5 next lowest 128k
I tried experimenting with file sizes and found the following:
16k, seek errors quadrupled, slow file access, slow loading times, and HD Tach Benchmarks showing inadequate HD performance, circa 37MB/sec sequential read (the speed of one of my single drives) I rechecked the settings, I had indeed made a RAID0 partition, but I figured the problem lay with the file size and accordingly RAID structure.
32k, Seek errors halved from previous setting, OS was faster, loading times reduced, benchmarks improved etc, but still not as fast as my previous default of 64k file size.
I had already tried the 64k file size, and had been having what I had thought good results at HDtach showing 60mb/sec sequential read, few seek errors, good loading times etc, so I tried 128k
128k, OMGWTF, load times blisteringly fast, HD Tach Benchmark at 80MB/sec sequential read, burst at theoretical max of 92MB/sec (ATA 100), games load times cut in half, hardly any seek errors... like I said OMGWTF
So naturally the urge to fiddle further had caught me.......time for 256k
256k, Load times same as 128k, HDtach showed 78MB/sec sequential read, but.........HDD drop off occurred a lot earlier, which is slower than 128k, and here is a kicker, burst speed 85MB/sec, and the seek errors rose quite a bit, so I did the only thing that was left to do, went back to 128k.
Filesize is critical with RAID 0, a standard brand new WinXP installation has an average filesize of 373kb, so the RAID stripe size should be 128k for WinXP, this should be an industry standard, but as per normal things like RAID take a few years to catch up.
But RAID results should also not be measured on a full drive, as that can lead to spurious and false results. If you ran a test on a full drive, received a result of 2MB (mentioned previously), and then set up a RAID 0 with 2MB stripes, you would lose a whole wad of your hard drive, as each stripe is for one half of one file only. That is why you should only use a standard, new installation to find the OS stripe size; any files that are added later will conform to the stripe size set, and having the wrong size will cause HD space wastage/overtaxing, which is not really good, and you will not receive the performance you should get.
So my advice is, if you are using WinXP make your stripe size 128k, and leave it as that until the next new OS arrives.
All these tests were done on a Silicon Image standalone RAID card with a Sil 0680 chipset, with the newest BIOS and drivers.
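The rule of thumb described in that guide (half the average file size, then drop to the next stripe size your controller actually offers) is easy to write down if you want to plug in your own numbers. A small Python sketch of it -- the list of available stripe sizes is just an example, use whatever your controller supports:
Code:
def stripe_for_avg_file(avg_file_kb, available=(4, 8, 16, 32, 64, 128, 256)):
    """Half the average file size, rounded down to the nearest available stripe size."""
    target = avg_file_kb / 2
    fits = [s for s in available if s <= target]
    return max(fits) if fits else min(available)

# 373/2 = 186.5 -> next lowest available setting is 128K, as in the post above.
print(stripe_for_avg_file(373))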
What cluster size would you use with a 128k stripe?
In the first 2 pages of this thread, cluster/stripe sizes are mentioned in detail.Quote:
Originally Posted by burningrave101
I already know what stripe size I want to try. I'm just wondering what cluster size to use with 128k. The first two pages just suggested using the same cluster size as the stripe size. I don't think that will work as well for 128k.Quote:
Originally Posted by Haltech
http://www.storagereview.com/guide20...erfStripe.htmlQuote:
For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance.
So what cluster size would be best for a larger stripe size like 128k?
EDIT: Here is some more information on why I think a 128k stripe will offer better real-world performance.
http://www.overclockers.com.au/artic...?id=179581&P=2Quote:
Intel is offering to arrange the RAID 0 array with 4, 8, 16, 32, 64 and 128 K stripe sizes. They are recommending 128 K for best application performance. Intel is also explaining that a smaller stripe size than 128 K gives a better transfer rate. We found out that this is exactly true. As shown in the attached screenshot, RAID 0 based on the ICH5R shows the best serial transfer rates in HDTach and Sandra with a stripe size of 4 K, exactly as predicted by Intel. However and much more important: as shown by Winbench 99, the best performance with applications is achieved with a stripe size of 128 K.
It is widely understood that the disk transfer rate indicates HDD performance, and many users indeed take it as THE sole benchmark for HDD performance. Well, it seems that this can be a misleading approach. As shown by our Winbench 99 test results, the best performance with applications is achieved with a stripe size of 128 K - but that returns a significantly lower transfer rate than a 4K stripe size. This shows that the transfer rate can be a misleading indicator of disk performance and should not be taken as the sole HDD benchmark. Applications access the file system on the disk in a different manner to how serial transfer rates are tested. In the latter case the disk head is reading and writing either a fixed file size or a small number of file sizes to and from given spots in a linear fashion from the beginning of the platter to its end. That is not how applications access the file system on the HDD. In this case files of continually changing sizes are read, written and moved in almost random patterns.
http://forums.cluboverclocker.com/ar...hp?t-2020.htmlQuote:
the stripe size is the amount of data that each hard disk has to read or write at any one time.
So in a RAID 0 with a 16K stripe, the first drive will be sent 16K and immediately thereafter the next drive will get the other 16K.
When working with small files, it is often best to use a smaller stripe size because:
1. it saves space (because if you have a 4k file, it fits within 16K and the other 12K is wasted).
2. You can grab a lot more small files faster if you don't have to read a large stripe size (like 128K, which is 8 times bigger).
3. A smaller stripe size can give better sequential transfer rates for a single task, i.e. it shows up well in benchmarks like HD Tach.
When working with large files:
1. A larger stripe size means less time is wasted going in between each drive requesting the next cluster of data. So if you are editing a multi-gigabyte file, it's easier on the hard drives to have a larger stripe size.
2. A larger stripe size usually gives better system performance in multitasking situations (benchmarks that open many programs at once show improved performance)
Recently I have been using 128K and my multitasking ability is better than when I was on 16k. For example, I can upload @ 2MB total per second divided between 200 different sources on emule, while I still watch a 6MB per second mpeg2 tv recording that I made using my tvcard. That kind of activity is really tough on a single hard drive and probably only possible because of RAID. Imagine having 200 different people requesting a random portion of a random file while you also use the same hard drive to run the OS and watch high quality tv.
With the latest corruption of my RAID, I think I am going to go back to 64 or 32k and try it out, because the transfer rates were slightly higher when I did dvd editing. i.e with a 128K i would get 500fps in processing a dvd (writing it back to the hard drive after editing it) and with 16k i remember getting somewhere around 600-700fps. although my overall system performance was slightly slower.
to sum it up, it depends what you plan to be doing with your computer. It also depends on the type of hard drive you have. Some hard drives work better with different cluster sizes. RAID stripe sizes can be anywhere from 8k to 4MB depending on your RAID controller's capability. My own controller has 16k, 32k, 64k, and 128k.
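To make the first point above concrete, here's a tiny Python sketch that maps a file's byte ranges onto member disks for a given stripe size. Pure illustration of plain RAID 0 striping -- real controllers may number and rotate disks differently:
Code:
def stripe_map(file_kb, stripe_kb, drives):
    """List (disk, start_kb, end_kb) for each stripe-sized chunk of a file in RAID 0."""
    layout = []
    offset = 0
    stripe_no = 0
    while offset < file_kb:
        chunk = min(stripe_kb, file_kb - offset)
        layout.append((stripe_no % drives, offset, offset + chunk))
        offset += chunk
        stripe_no += 1
    return layout

# A 48K file on a 2-disk array with a 16K stripe: disk 0 gets 0-16K and 32-48K,
# disk 1 gets 16-32K -- the "first drive gets 16K and immediately thereafter the
# next drive gets the other 16K" behaviour described above.
for disk, start, end in stripe_map(48, 16, 2):
    print(f"disk {disk}: {start}K-{end}K")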
The only game I play is Elder Scrolls IV: Oblivion, and with those large .bsa files I would think a larger stripe size would offer the best performance.
A larger stripe size will do better when dealing with mostly large files...if it were me I would match the stripe and cluster...128/128.
I would HIGHLY recommend that you read this entire thread...this was all benched and tested a long time ago:
http://forums.pcper.com/showthread.php?t=267729
also read this:
http://faq.storagereview.com/tiki-in...age=StripeSize
So far, my burst speeds have doubled but my sequential read speeds are pretty much the same. Ironic, as everyone pointed out that my speeds were bus limited and a hardware controller would free them up. I went as far as to mod my Ultra-D to SLI so as to enable x8 on the second PCI-E slot, as opposed to x2 in Ultra mode, to eliminate bus saturation issues. I'm going to try different stripe sizes and maybe even RAID 5. If I can switch to RAID 5 without significant changes in speed, maybe I will go that route. My only assumption is that first-gen Raptors are no match for current SATA II drives with high densities, and they are no match for current-gen Raptors. I have tried enabling and disabling TCQ and setting write-back vs write-through, with only minor differences; I will try different stripe/block sizes since I was at 16/4, before and after. Regardless, burst has doubled, CPU utilization is still near zero on both, RA is the same, and there's a slight improvement with the Areca on the same bench. I will also test write speeds and test with other benchmarks and a raw copy :fact:
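On the bus-limit point, the theoretical ceilings are worth writing down, because they show why plain PCI chokes a multi-drive array while even a x2 PCI-E link usually doesn't. A rough Python sketch using the usual headline numbers (these are pre-overhead figures, the PCI bus is shared with everything else on it, and the per-drive speed is a made-up round number, so real throughput is lower):
Code:
# Theoretical peak bandwidth in MB/s (PCIe 1.0 is ~250 MB/s per lane per direction).
BUS_MB_S = {
    "PCI 32-bit/33MHz": 133,
    "PCIe 1.0 x2":      2 * 250,
    "PCIe 1.0 x8":      8 * 250,
}

# Example array: four drives at ~60 MB/s sustained each.
ARRAY_MB_S = 4 * 60

for bus, ceiling in BUS_MB_S.items():
    limit = min(ceiling, ARRAY_MB_S)
    print(f"{bus:>17}: ceiling {ceiling:>4} MB/s -> array limited to ~{limit} MB/s")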
For real-world performance I would suggest a bench like Winbench 99 instead of HD Tach. Huge scores in HD Tach don't necessarily mean the best performance in real-world applications.
QFT....IOMeter is another good program to test true HDD performance....Quote:
Originally Posted by burningrave101
Thanks Grinch for trying to help me. It seems this George fellow has been busy with something else lately, so I probably won't bother him.
Been playing around with my problem some more now. It is not directly related to RAID; I get the same behavior with a single drive too, and with the Sil3114 controller. It is not about CPU speed as such either...decreasing the multiplier has no effect. So HTT gives the main contribution (not surprising), though the memory multiplier seems to have some effect too.
So I boot into Windows at high speed, decrease HTT with ClockGen, and my HDD speed gets a big increase. Got 9k in PCMark05. That was enough to get ninth place in the ORB. And with a 4400+ @ 1.7GHz. :D
Tells you a thing or two about this benchmark...
Try running ATTO.
Using an older version of ATTO that only did 32MB total length, this is what I got with my 1210 and 4x500GB Seagate 7200.9s.
Now 32MB total length with the 1230 and its 256MB cache. I will post an updated ATTO run when I get the 1GB of RAM to use as the cache.