Hi,
I have 4 WD Raptors 740GD. Now, my question is, how should I RAID them, and on which controller? I want maximum performance.
This is on a DFI Lanparty nF4 SLI DR Expert mobo.
TIA
Put it on the nvidia controller. IMHO, it performs better than the sil.
Just use RAID0 on the Nvidia and make sure you cool these puppies as well. ;)
Use it as I use my Seagate: 2 in raid 0 for Windows and 2 in raid 0 for Backup...
I have an external backup disk ;)
Running the setup SillySider posted, it rocks! For some reason, burst rate with the Sil was around 100MB/s; now it's over 200. w00t!
I don't think backing up data onto a RAID-0 array is a good idea...
I like the Raid 0+1 idea if you can afford to lose half the space.
I have some interesting results for you that need to be taken into consideration before you make up your mind.
This is a picture comparing a 4x400gb raid0 array. They are a matched set of Western Digital RE2 sata drives.
In Red is an offboard PCI-X 133 Adaptec SATA/SAS RAID controller. Not the best, by any means, but it's well built and fast. In Blue is the onboard nVraid.
http://home.comcast.net/~butterfry/h...r0.compare.jpg
That image basically speaks for itself. The nVraid controller can't handle the data. Also notice the CPU usage on the nVidia controller; that is what you're losing to keep that stripe running, even IF your board design can handle it.
Here are the same drives only with 2x400 this time.
http://home.comcast.net/~butterfry/h...r0.compare.JPG
Can't get much more identical.
YMMV, of course, but this is a dual core dual processor NF Professional based board. If anything has the bandwidth I'd guess this does.
I'm currently writing an exposé on something we all forget to consider: the drive controller. Chipset manufacturers just aren't building what we need for high-load systems. With the price of GB per dollar at an all-time low, many people are investing in huge amounts of storage without considering the performance implications of all that data trying to get sucked through a straw. Gentlemen, we need fatter pipes.
-butter
butter,
How about 4x400 on the ULI 1575 SATA?
all my results can be found here, but read my disclaimer at the top. all the uli results are valid though.
http://www.flickerdown.com/forums/showthread.php?t=243
I'll do the 4x400 uli in a bit. i need to do a windows install on a different drive to get accurate results. And since i'm running a 2x160 raid for boot on the uli currently I can't just plug the 4x in.
If you say you have an external disk for backup, then stick the 4 Raptors in RAID 0... :eek:
This question is useless without telling us what exactly you want to speed up ;)Quote:
Originally Posted by D_o_S
Onboard SATA won't give you great performance anyway.
Just do the math. That chipset has 32 PCI Express lanes.
16 lanes are used for the SLi (8x each or 16x single)
4 lanes are used for PCI-Express x4 slots (utterly dumb design decision for a useless slot)
1 lane is used for the PCI-Express x1 slot (another dumb design decision for another useless slot)
8 lanes are used for chipset intercommunications
That leaves you 2-3 lanes for PCI, USB, onboard sound, onboard GbE, and SATA RAID, with total bandwidth from 500MB/s to 750MB/s (250MB/s per PCI-Express lane). GbE is 128MB/s, PCI is 33MB/s (99MB/s total), and each Raptor probably has a sequential read speed of 60MB/s and a burst read speed of 100MB/s.
Therefore if you RAID0 4 Raptors, your total available bandwidth will be eaten alive; the array will be extremely limited and will not scale 4x. There is simply no reason to use RAID0 beyond 3 drives, as your marginal performance increase becomes null. Besides, your RAID0 array will die roughly 4 times as early, since the drives' MTBFs combine in series.
The better option would be RAID 0+1 or RAID5.
It's pointless to RAID beyond 4 disks with an onboard controller (and without PCI-X 133 slots). It's like asking for professional HD audio sound quality from onboard audio or SLi 3D Gfx from onboard integrated Gfx. It ain't going to happen.
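If you want to play with the numbers yourself, here is a rough back-of-the-envelope sketch in Python. The lane counts, the 250MB/s per-lane figure and the per-device speeds are the assumptions I made above (and people dispute them later in the thread), so treat it as illustration only, not measurement:

# Back-of-the-envelope bandwidth budget; every figure here is an assumption from the post above.
mb_per_lane = 250       # assumed per-lane bandwidth, MB/s
gbe = 128               # rough GbE ceiling, MB/s
pci = 99                # rough PCI total, MB/s, as assumed above
raptor_str = 60         # assumed sustained MB/s per Raptor
ideal_stripe = 4 * raptor_str

for lanes_left in (2, 3):   # the "2-3 lanes" left over after SLi, the slots and the chipset link
    budget = lanes_left * mb_per_lane
    headroom = budget - gbe - pci - ideal_stripe
    print(f"{lanes_left} lanes -> {budget} MB/s budget, {headroom} MB/s left for everything else")

With only 2 lanes the budget is basically gone once GbE, PCI and an ideal 4-drive stripe are all on it, which is the point I'm making.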
I guess maybe my screenshot of what happens to an nVraid at 4x falls on deaf ears.
Edit: but if you've got the dough to blow, i've got a controller that can handle it.
ooo yaQuote:
Originally Posted by butter_fry
didnt see that.
a picture is worth 1000 words. :toast:
4 drive RAID0 is stupid anyway as you will at least need a dedicated PCI-X 66Mhz slot to feed the bandwidth. Most AMD overclocking motherboards suck :banana: for a decent RAID storage system.
notice the RED stripe pci-x 133 8way SAS/SATA. no problems there with the exact same drives.
Ya, but the thing is the onboard controller will only have enough bandwidth for 2-3 mediocre-performance Raptors, so any RAID with 3 drives or fewer should be fine.Quote:
Originally Posted by butter_fry
it's about sequential/burst read/write speed and bus bandwidth. that's what sucks about AMD-based desktop/workstation motherboards; not a lot of PCI-X options available. :(
Or in the language of overclockers, it's like putting DDR2 1000MHz memory into a motherboard with a 133MHz FSB. It ain't gonna work.
Quote:
Originally Posted by vitaminc
Sorry your math is all wrong.
First, the SATA on the NVidia chipsets sits directly in the southbridge, I don't think it actually needs PCIe lanes.
Even then, PCIe x1 bandwidth is 500 MB/sec (byte, not bit).
That is plenty even for 4 raptors. A single lane.
Apart from that, if your applications get a speedup out of RAID-0, then 180 MB/sec are already very nice to have.
It doesn't "need" the PCIe lanes, but its using those bandwidth to communicate/emulate. NVDA uses 8 or 16 PCIe lanes to link the north and south bridge due to their SLi design decision/flaw.Quote:
Originally Posted by uOpt
CPU -> NB -> SB -> SATA
So your SATA bandwidth and CPU/memory load will be impacted significantly when SLi card #2 and SATA are both sucking your bandwidth out. On top of that there's the limited computational power of NVDA's chipsets for handling all the USB, SLi, SATA, PCI, and PCIe traffic routing at the same time.
There is a good reason why Nvidia SLi/gaming chipsets are rarely used in storage systems and are getting designed out by Broadcom.
I will go dig up some schematics. But both Nvidia's nForce4 SLi and SLi16 chipsets are not really designed for workstation level storage systems.
http://en.wikipedia.org/wiki/List_of....28internal.29Quote:
Originally Posted by uOpt
Looks like 250MB/s to me. I don't feel like arguing against Wikipedia. Even if you convert 2500Mb/s into MB/s, it's still only 312.5MB/s, not even enough for Ultra320/FC.
Uh, 180MB/sec STR? That's quite impossible with Raptors on Nvidia's SoC SATA. You cannot expect STR to add up, i.e. 4 HDDs with an STR of 60MB/s each in RAID0 will not add up to 240MB/s.Quote:
Originally Posted by uOpt
*sigh* again it looks like my screenshot has been ignored.Quote:
Originally Posted by uOpt
uOpt, I showed you what happens when you put 4x raid0 on the nvidia chipset.
The PCI Express bus is bidirectional, therefore it's 500MB/s total ;)Quote:
Originally Posted by vitaminc
And they will probably choose to believe 4xRAID0 Raptors will yield better results because, uh, your test is not using Raptors (judging from the name and the random access time).Quote:
Originally Posted by butter_fry
RAID0 beyond 2 drives is simply retarded and doesn't yield any significant performance gains.
If you want a fast drive, ditch the kiddy Raptors and pick up a Cheetah 15k.4 (or 15k.5, with STR higher than 100MB/s and ~3.2ms RAT). U320 15k RPM SCSI drives will still outperform SATA2 drives by a good margin in all tests.
That is correct. However, each direction is separate, so your STR (which goes one way) will still be capped at 1 direction * 2.5Gb/s/direction = 2.5Gb/s, excluding the packet overhead.Quote:
Originally Posted by derektm
See here:Quote:
Originally Posted by vitaminc
http://cracauer-forum.cons.org/forum/raid.html
Pure software raid-0 on three 7200.8 on NVidia NForce SLI gives:
- 182.59 MB/s (191457784 B/s) (43.8% CPU, 4267.8 KB/sec/CPU) write
- 159.23 MB/s (166967535 B/s) (43.1% CPU, 3779.7 KB/sec/CPU) read
That is total, average speed of reading/writing a 16 GB file in 8 KB blocks using a machine with 512 MB RAM.
To top it off, you can look at the numbers for delivering this over NVidia's GbE at the same time.
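If anyone wants to run the same kind of test on their own box, the rough shape of it is just timing block-sized sequential writes and reads of a file much larger than RAM. A minimal Python sketch of that idea (my own, not the tool used on that page; the test-file path is a placeholder you'd point at the array, and remember to delete the file afterwards):

import os, time

PATH = "/mnt/array/testfile"   # placeholder path on the array under test
BLOCK = 8 * 1024               # 8 KB blocks, as in the numbers above
TOTAL = 16 * 1024**3           # 16 GB, big enough to defeat RAM caching

buf = b"\0" * BLOCK
t0 = time.time()
with open(PATH, "wb") as f:            # sequential write
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
print("write MB/s:", TOTAL / (time.time() - t0) / 1024**2)

t0 = time.time()
with open(PATH, "rb") as f:            # sequential read
    while f.read(BLOCK):
        pass
print("read MB/s:", TOTAL / (time.time() - t0) / 1024**2)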
:slap: Don't argue about RAID with uOpt.. he is always right.. (at least when it comes to RAID1, RAID0 and onboard RAID)Quote:
Originally Posted by butter_fry
Just got a WD 320GB SATA2 16MB for storage, so I used my old 2x120GB Hitachi SATA drives for RAID0; this is what I got with it.
http://img272.imageshack.us/img272/9418/raid08wl.jpg
I don't get the context here.Quote:
Originally Posted by butter_fry
I was bashing onboard SATA RAID-1, not RAID-0.
1) You quoted the beginning of the disk but not the end of it, which would be a lot lower. What's more important is the average STR throughout the whole HDD, not the STR at the beginning of the disk (the fastest part).Quote:
Originally Posted by uOpt
2) Software RAID0 results in 40+% CPU utilization.
3) FreeBSD instead of WinXP. Two different file systems. Minor impact on benchmarks.
To quote from that bench:
I always wonder why people expect any decent performance from integrated RAID controllers when they know a stand-alone gfx card will always beat integrated chipset video solutions.Quote:
First of all, forget about all these cheap and/or onboard SATA "RAID"
controllers, they just do software RAID in the driver, and you will
lose your array if a disk fails when the OS is not up (read horror
stories on the anandtech forums and elsewhere).
Reasons why not to use RAID0 beyond 2 drives:
1) random access time will increase as you add more drives to your array (duh; see the toy sketch after this list)
2) incremental burst read speed will decrease as you add more drives
3) sequential transfer rate scales the best with RAID0, but the incremental gain per drive will also shrink due to packet overhead.
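For reason 1, here is a toy Monte Carlo (the seek-time distribution is made up, purely illustrative): a striped request has to wait for the slowest drive it touches, so the effective access time is the max of N seeks, and that max creeps up as you add drives.

import random

def avg_access_ms(drives, samples=100_000, base=4.0, spread=4.0):
    # assumed seek model: each drive takes base + uniform(0, spread) ms,
    # and the striped request finishes when the slowest drive does
    total = 0.0
    for _ in range(samples):
        total += max(base + random.random() * spread for _ in range(drives))
    return total / samples

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): ~{avg_access_ms(n):.2f} ms average access")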
I have the worst results, I think... coming from two Hitachi drives. It's irritating too!
I have 4 WD Raptors 36G Gen 1 on the Nvidia controller with a 16k stripe. I made a single NTFS partition on the array. I am very pleased with the performance. I used to have two drives but went ahead and added the other two into the array. Forget benches, I definitely noticed a difference loading games, Windows, reading files, and so on, but if you want a benchmark here you go...
I would think the 2x400 array would bench a little higher. This is my 160x2 on the Nvidia controller.
http://img160.imageshack.us/img160/4780/hdraid0le.png
Do you have HDTach bench results for the drive in non-RAID configurations? It seems your HDD is deteriorating...Quote:
Originally Posted by Reinvented
If you connect the peaks of your results, you will get a curve, and that should be your "real" RAID0 x2 result. All the clipping (the STR dips in between) is a sign of a failing HDD. A S.M.A.R.T. test will only flag a failing HDD when it's at the end of its life, not when it first shows signs of failing.
Your STR is definitely capped by the junk NVDA controller.Quote:
Originally Posted by Hassan
WD Raptors 36G should have about 75MB/s at the beginning of the drive. Your 4x RAID0 array has only 140MB/s ~ 200MB/s at the beginning of the array. You are losing your sequential read speed. Period.
Your burst speed and RAT look reasonably normal, but even RAID controllers from junkyards won't impact those.
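Quick math on the scaling, using the 75MB/s per-drive estimate above (that number is my assumption, not a measurement of his drives):

per_drive = 75.0              # assumed Raptor 36GB STR at the start of the disk
ideal = 4 * per_drive         # what a perfect 4-way stripe would deliver
for observed in (140.0, 200.0):
    print(f"{observed:.0f} MB/s observed = {observed / ideal:.0%} of the {ideal:.0f} MB/s ideal")

Somewhere between roughly half and two thirds of the ideal stripe, which is why I say the controller is eating your sequential speed.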
You need a hardware RAID controller. PCIe ones are available that are pretty good but cost around $350-400 for a 4-drive array. The Areca 1210 offers the best bang for the buck IMO, but YMMV as to whether it is worth it or not; even though RAID is nice, the price vs. performance ratio sucks.
Anyways... for comparison here are my benches with 4x300GB 16MB cache MaxLineIIIs, NCQ off, on a PCIe x8 Areca 1210 hardware RAID controller.
http://img242.imageshack.us/img242/8...xlineii.th.jpg
The embedded 128MB cache really improves performance for desktop usage in general; the incredible STR makes it all worthwhile, though.
Yeah I have a feeling that they are dying also.
I can sometimes hear them halt, and then grind up really loudly like they're seeking - but they're really not. Sometimes they will do it while the RAID configuration is showing at POST.
This was the highest one I got out of 5 tests.
http://img193.imageshack.us/img193/3...results3bl.jpg
Also - some more information regarding these two drives of mine:
-The first one I had before was bought from ZipZoomFly, and is still working to this day. Purchased in July.
-The second drive I have was purchased from NewEgg.com and failed after 3 days of use. It refused to spin up and froze at POST for more than 4 minutes.
-Second drive was RMA'd back to NewEgg, and another drive took its place.
-Both drives have different firmware versions.
-Both drives have the exact same settings with Hitachi's Feature Tool.
-Both drives sometimes make a grinding sound after POST while the controller is scanning for the RAID array.
I'm gonna try the Drive Fitness Tool in a bit, and then see what comes up. I hope I don't have to RMA them, as I don't have any ESD bags. (They're required for an RMA, along with foam padding and other things.)
Edit: Also, these are using a 16k stripe.
Quote:
Originally Posted by mesyn191
IMO it's worth it. After all, we are on the Xtreme Systems forum, not the Dell helpdesk. :p But it's certainly hard for the normal overclocking joe to figure out how to set up RAID properly.
Suggestions for your RAID array, if I may. I see no point in a 1.2TB array unless it's your storage drive, but even then, it would on average be faster and safer to use independent drives mounted as directories, since chaining MTBFs in series significantly hurts your data security. Independent drives mounted as directories will still be faster than one big 1.2TB partitioned array, as your RAT only deteriorates as you RAID.
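To put a rough number on the MTBF-in-series point, here is a simplified model (it assumes independent, identical drives, that losing any one drive loses the whole stripe, and a made-up spec-sheet MTBF):

mtbf_single = 1_200_000        # hours, hypothetical spec-sheet figure for one drive
for n in (1, 2, 4):
    print(f"{n} drive(s) striped: effective MTBF ~ {mtbf_single / n:,.0f} hours")

Four drives in one stripe fail, on average, four times as often as one drive, which is why independent drives are the safer layout.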
You can have the drives spin down to reduce the acoustics, but that will hamper your performance a bit.Quote:
Originally Posted by Reinvented
I had the same problem with one of my Cheetah 15k.3 drives, even after I flashed the firmware from 0004 to 0007. The RMA gave me 0005, and it runs smooth. :)
I wouldn't use 16k stripes unless I were hosting a web server.Quote:
Originally Posted by Reinvented
1) Most of the files (MP3, JPEG, Word/Excel, etc) on a typical HDD should be larger than 64k, so RAID0 64k stripes will benefit those files.
2) Using 16k stripes in a normal gaming desktop will significantly reduce performance, as the packet overhead will jam the pipe.
Smaller stripes are not always better, as overhead weighs in and kills your incremental performance gain. My $0.02. (Rough sketch below.)
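A crude way to see the trade-off is to count how many stripe-sized requests a file turns into (toy model, my own assumption that every stripe-sized chunk costs one request's worth of overhead):

import math

def requests(file_kb, stripe_kb):
    # assumed model: each stripe-sized chunk of the file is one request
    return math.ceil(file_kb / stripe_kb)

for name, size_kb in (("5MB MP3", 5 * 1024), ("200KB JPEG", 200), ("48KB doc", 48)):
    for stripe_kb in (16, 64, 128):
        print(f"{name}: {requests(size_kb, stripe_kb):4d} requests at {stripe_kb}k stripe")

The big files are where the 16k stripe piles up requests; the small stuff barely notices the difference.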
I am just saving up for the 3ware 8-port SATA II PCI-e controller... but I read some have compatibility issues with enthusiast nf4 boards, as the PCI-e port was really designed for SLI and not a RAID host adapter.
Try the PCI-E x4 slot instead? Guys over at 2cpu have had success using the 8X slots with the Asus and DFI boards, so I would *presume* that the 4X slot would work fine. I haven't tried personally.............yet.Quote:
Originally Posted by Hassan
Got off the phone with Hitachi GST Support, and they mentioned a firmware update. We will see if it helps since my drives have like the first two revisions of firmware...haha. I will be testing this immediately.
I have it available for download for those of you who want to update it. Just PM me, and I will email it. It is in .iso format, and must be burned. The instructions will be available as well.
I wouldn't use the 3ware card; its performance has been shown to be fairly low for all the money you're spending.Quote:
Originally Posted by Hassan
http://www.gamepc.com/labs/view_cont...50sx4lp&page=6
Those benches are for the 4 port and not 8 port version but they're essentially the same hardware, just more cache IIRC for the 8 port version.
I've got external and internal hard drives for backup; honestly, though, I could care less if the array gets hosed. I'm looking to run it in a desktop environment, not a mission-critical server. Worst comes to worst, I'm prepared with all my data on the backup hard drives. I've also got a custom install DVD for WinXP that has all my drivers and some of the smaller apps (i.e. CloneCD, DVD Decrypter, etc.) on it, and fresh off a format I've timed an install process at roughly 7 min. on my array. :p: IMO the inherent loss of reliability in a RAID array for desktop environments is massively overblown as well. My particular array has been up and running since mid-August 2005 with no issues...:banana:Quote:
Originally Posted by vitaminc
I've always sworn by 3ware, but I am seeing Areca's name more and more; maybe I'll try that one out instead, thx for the linkage
That is because right now Areca is making the world's best PCIe SATA raid cards..Quote:
Originally Posted by Hassan
Sorry if this has been answered already (didn't read the whole thread), but what controller should I get then for SATA, to increase performance? Keep in mind that it has to fit the Expert (I don't want to stick anything between the GPUs), so that leaves me with PCI as the only option....
If all you can use is PCI-slot-based RAID cards, then forget it and just stick with the NVRAID; with 4 drives you'll run up against the PCI bus's bandwidth limit long before you reach peak performance.
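Quick sanity check on why (the bus figure is standard 32-bit/33MHz PCI, the per-drive number is a rough assumption):

pci_bus = 133         # MB/s, shared by every device on the PCI bus
raptor_str = 70       # assumed per-Raptor sequential MB/s near the start of the disk
drives = 4
ideal = drives * raptor_str
print(f"ideal {drives}-drive stripe: {ideal} MB/s vs {pci_bus} MB/s shared PCI bus")

The stripe wants roughly twice what the whole PCI bus can move, before the card's own overhead and whatever else is on the bus.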
umm the Expert has Two X16 slots so one Graphics card and one Raid card..Quote:
Originally Posted by D_o_S
should work nicely..
PCIe is what you should get I suggest Areca
So, yeah, I can agree with the Areca conclusion. I put my Areca 1210 PCI-e under the microscope today, and this is what I came up with.
My previous fastest was a PCI-X based Adaptec card; the following shows both cards going head to head.
This first image is a reasonable 2x raid0 setup using the WD RE2 400yr drives.
The sequential stripe is identical, but notice what some well tuned cache can do for burst performance. Red is Adaptec, Blue is Areca
http://home.comcast.net/~butterfry/h...EC.compare.jpg
In this shot I'm using a 4x RAID0 of the same WD drives. While I'm not advocating this as a good setup, it is the best way to test the limits of a card, and the bus.
http://home.comcast.net/~butterfry/h...EC.compare.jpg
nice scores, came out to be a little better than mine with those 400GB WD's.
i've just got 4x74GB Raptors and plan to RAID-0 them. don't want to purchase a separate PCI-E controller tho, so it will go on the NF4 ports on my SLI-DR mobo
Read earlier in this thread; I've posted a screenshot of what happens to the NF4 controller with that much data.
Um... I'm using 2 6800 Ultras in SLI ;)Quote:
Originally Posted by nn_step
I can't get Areca.... so should I go for Adaptec? What about High Point?
Adaptec is usually OK, Highpoint sucks.
Do a search no matter what for reviews on the card you're interested in before buying.
if you can't go PCIeQuote:
Originally Posted by D_o_S
and are going either PCI or PCI-X, get Adaptec.
i would think that the integrated controller would be better than one running on a regular PCI bus, am i wrong?
I used to think that also. I guess it pretty much will vary from computer to computer you know?Quote:
Originally Posted by brandinb
So far some of these cards are boasting about a 200% increase over onboard SATA/RAID controllers. Too bad they aren't cheaper...which makes the onboard stuff more ideal...:stick:
It's only with 4-drive arrays that you'll see that boost over the integrated RAID (lol, it's really software RAID, which for most things sucks). For 2-drive RAID 0 you can actually get fairly close to the performance of these hardware controllers. Only for RAID 0 and 1, though; if you want to do something like 0+1 or RAID 5, those integrated controllers offer truly abysmal performance, I've seen some people get around 5MB/s with em'...
Yea, those prices for the hardware cards look pretty scary at first, but you gotta remember that you can use em' across multiple machines over the years, so the cost gets spread out. Well worth it IMO as I plan to use my Areca card for at least 4 years or so before it gets sold off or used to build a backup house server or something.
I quoted the beginning of the disk, because anyone with a brain will place partitions that hold large contiguously read/written files at the beginning of the disk.Quote:
Originally Posted by vitaminc
The 40% CPU at that speed is mostly filesystem and controller driver; it has nothing to do with the software RAID. A hardware RAID won't have any lower CPU utilization here.
As for reason 1) of yours: clearly the random read access time stays the same for me with increasing number of drives, I don't know where you would see a possible slowdown. If you do some simultaneous writes things look even better.
Reason 2), I don't know what you call an "incremental burst read", if it is what the choice of words might imply, then this is something that the OS/filesystem's readahead functionality takes care of. It has nothing to do with the disk controller, disk, or RAID.
Reason 3): packet overhead?