I don't get the context here.
Quote:
Originally Posted by butter_fry
I was bashing onboard SATA RAID-1, not RAID-0.
1) You quoted the STR at the beginning of the RAID array but not at the end, which would be a lot lower. What's more important is the average STR throughout the whole HDD, not the STR at the beginning of the disk (the fastest part) - see the quick sketch after the quote.
Quote:
Originally Posted by uOpt
2) Software RAID0 results in 40+% CPU utilization.
3) FreeBSD instead of WinXP. Two different file systems. Minor impact on benchmarks.
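On point 1: a minimal sketch of why the whole-disk average matters, assuming (purely for illustration) a drive that reads ~75 MB/s on the outer tracks and ~50 MB/s on the inner ones, with a roughly linear falloff:
Code:

# Rough model of sequential transfer rate (STR) across a platter.
# Both endpoint numbers are illustrative assumptions, not measurements.

start_mb_s = 75.0   # STR at the start (outer edge) of the disk
end_mb_s = 50.0     # STR at the end (inner edge) of the disk

# With a linear falloff, the whole-disk average is just the midpoint.
avg_mb_s = (start_mb_s + end_mb_s) / 2
print(f"Peak STR: {start_mb_s} MB/s, whole-disk average: {avg_mb_s} MB/s")
# The left edge of an HDTach trace overstates everyday throughput.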
To quote from that bench:
Always wonder why people expect any decent performance from integrated RAID controllers when they know a stand-alone gfx card will always beat the integrated chipset video solutions.
Quote:
First of all, forget about all these cheap and/or onboard SATA "RAID"
controllers; they just do software RAID in the driver, and you will
lose your array if a disk fails when the OS is not up (read horror
stories on the anandtech forums and elsewhere).
Reasons why not to use RAID0 beyond 2 drives:
1) random access time will increase as you add more drives to your array (duh)
2) incremental burst read speed will decrease as you add more drives
3) sequential transfer rate scales best with RAID0, but the per-drive gain still shrinks incrementally due to packet overhead - rough model below.
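A minimal sketch of that diminishing-returns argument; the per-drive STR and the overhead fraction are made-up assumptions, just to show the shape of the curve:
Code:

# Toy model: RAID0 sequential throughput with per-drive overhead.
# single_drive_mb_s and overhead_per_drive are illustrative assumptions.

single_drive_mb_s = 75.0    # assumed STR of one drive
overhead_per_drive = 0.05   # assumed 5% efficiency loss per extra drive

for n in range(1, 5):
    efficiency = 1.0 - overhead_per_drive * (n - 1)
    str_mb_s = n * single_drive_mb_s * efficiency
    print(f"{n} drives: ~{str_mb_s:.0f} MB/s ({efficiency:.0%} efficient)")
# Each added drive buys less than the last one, while random access
# time and burst scaling only get worse.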
I have the worst results I think... stemming from two Hitachi drives. It's irritating too!
I have 4 WD Raptors 36G Gen 1 on the Nvidia controller with a 16k stripe. I made a single NTFS partition on the array. I am very pleased with the performance. I used to have two, but went ahead and added the other two into the array. Forget benches - I definitely noticed a difference loading games, Windows, reading files, and so on. But if you want a benchmark, here you go...
I would think the 2x400 array would bench a little higher. This is my 160x2 on the Nvidia controller.
http://img160.imageshack.us/img160/4780/hdraid0le.png
Do you have HDTach bench results for the drive in non-RAID configurations? It seems your HDD is deteriorating...
Quote:
Originally Posted by Reinvented
If you connect the peaks of your results, you will get a curve, and that should be your "real" RAID0 x2 result. All the clippings (the STR dips in between) are signs of a failing HDD - see the sketch below. A S.M.A.R.T. test will only flag a failing HDD at the end of its life, not when it first shows signs of failing.
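A minimal sketch of that "connect the peaks" idea; the trace samples are fabricated just to show the technique:
Code:

# Upper envelope of a noisy HDTach-style STR trace.
# The samples below are made up; a real trace would come from the tool.

trace = [72, 70, 41, 69, 68, 35, 66, 64, 30, 62]  # MB/s samples (assumed)

# The local maximum around each point approximates the "connect the
# peaks" curve; deep dips below it are the clippings.
envelope = [max(trace[max(0, i - 1):i + 2]) for i in range(len(trace))]

dips = [i for i, (v, e) in enumerate(zip(trace, envelope)) if e - v > 15]
print("envelope:", envelope)
print("suspicious dip positions:", dips)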
Your STR is definitely capped by the junk NVDA controller.
Quote:
Originally Posted by Hassan
WD Raptors 36G should have about 75MB/s at the beginning of the drive. Your 4x RAID0 array has only 140MB/s ~ 200MB/s at the beginning of the array - quick math below. You are losing your sequential read speed. Period.
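Back-of-the-envelope on that claim; the 75MB/s per-Raptor figure is from the post above, the rest is arithmetic:
Code:

# How far the 4x RAID0 array falls short of ideal scaling.
per_drive_mb_s = 75.0        # claimed Raptor STR at the start of the disk
drives = 4
observed = (140.0, 200.0)    # MB/s range reported at the start of the array

ideal = per_drive_mb_s * drives   # 300 MB/s if RAID0 scaled perfectly
for obs in observed:
    print(f"{obs:.0f} MB/s observed = {obs / ideal:.0%} of ideal {ideal:.0f} MB/s")
# ~47-67% scaling efficiency: consistent with the controller (or
# failing drives) throttling sequential reads.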
Your burst speed and RAT look reasonably normal, but even RAID controllers from junkyards won't impact those.
You need a hardware RAID controller. PCIe ones are available that are pretty good but cost around $350-400 for a 4 drive array. The Areca 1210 offers the best bang for the buck IMO, but YMMV as to whether it is worth it or not; even though RAID is nice, the price vs. performance ratio sucks.
Anyways... for comparison here are my benches with 4x300GB 16MB cache MaxLineIII's, NCQ off, on a PCIe x8 Areca 1210 hardware RAID controller.
http://img242.imageshack.us/img242/8...xlineii.th.jpg
The embedded 128MB cache really improves performance for desktop usage in general; the incredible STR makes it all worthwhile, though.
Yeah I have a feeling that they are dying also.
I can sometimes hear them halt, and then grind up really loudly like they're seeking - but they're really not. Sometimes they will do it while the RAID configuration is showing at POST.
This was the highest one I got out of 5 tests.
http://img193.imageshack.us/img193/3...results3bl.jpg
Also - some more information regarding these two drives of mine:
-The first one I had before was bought from ZipZoomFly, and is still working to this day. Purchased in July.
-The second drive was purchased from NewEgg.com and failed after 3 days of use. It refused to spin up and froze at POST for more than 4 minutes.
-The second drive was RMA'd back to NewEgg, and another drive took its place.
-Both drives have different firmware versions.
-Both drives have the exact same settings with Hitachi's Feature Tool.
-Both drives sometimes make a grinding sound after POST when the controller is scanning for the RAID array.
I'm gonna try the Drive Fitness Test in a bit, and then see what comes up. I hope I don't have to RMA them, as I don't have any ESD bags. (It's required for an RMA, along with foam padding and other things.)
Edit: Also, these are using a 16k stripe.
Quote:
Originally Posted by mesyn191
IMO it's worth it. After all, we are on the Xtreme Systems forum, not the Dell helpdesk. :p But it's certainly hard for the normal overclocking joe to figure out how to set up RAID properly.
Suggestions for your RAID array, if I may. I see no point in a 1.2TB array unless it's your storage drive, but even then, it would be faster and safer on average to use independent drives mounted as directories, since MTBF in series significantly hurts your data security - rough math below. Independent drives mounted as directories will still be faster than a partitioned 1.2TB array, as your RAT only deteriorates as you RAID.
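A minimal sketch of the MTBF-in-series point; the per-drive MTBF is an assumed round number, not a spec:
Code:

# RAID0 has no redundancy: the array fails when ANY member fails,
# so failure rates add and the effective MTBF divides by drive count.
per_drive_mtbf_hours = 600_000   # assumed per-drive MTBF (illustrative)

for n in (1, 2, 4):
    array_mtbf = per_drive_mtbf_hours / n
    print(f"{n}-drive RAID0: effective MTBF ~{array_mtbf:,.0f} hours")
# Independent drives mounted as directories lose only one drive's data
# on a failure; a striped array loses everything.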
You can have the drives spin down to reduce the acoustics, but that will hamper your performance a bit.
Quote:
Originally Posted by Reinvented
I had the same problem with one of my Cheetah 15k.3 drives, even after I flashed the firmware from 0004 to 0007. The RMA gave me 0005, and it runs smooth. :)
I wouldn't use 16k stripes unless I was hosting a web server.
Quote:
Originally Posted by Reinvented
1) Most of the files (MP3, JPEG, Word/Excel, etc) on a typical HDD should be larger than 64k, so RAID0 64k stripes will benefit those files.
2) Using 16k stripes in a normal gaming desktop will significantly reduce performance, as the packet overhead will jam the pipe.
Smaller stripes are not always better, as overhead weighs in and kills your incremental performance gain - see the sketch below. My $0.02.
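A minimal sketch of the stripe-size tradeoff; the file sizes below are assumptions for illustration:
Code:

# How many stripe segments (and thus per-segment request overheads)
# a single file read touches at different stripe sizes.
import math

file_sizes_kb = {"JPEG": 512, "MP3": 4096}   # assumed typical sizes
for stripe_kb in (16, 64):
    for name, size_kb in file_sizes_kb.items():
        segments = math.ceil(size_kb / stripe_kb)
        print(f"{stripe_kb}k stripe, {name} ({size_kb}KB): {segments} segments")
# 16k stripes split the same file into 4x as many segments as 64k
# stripes, so any fixed per-request overhead is paid 4x as often.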
I am just saving up for the 3ware 8 port SATA II PCIe controller... but I read some have compatibility issues with enthusiast NF4 boards, as the PCIe slots were really designed for SLI and not a RAID host adapter.
Try the PCIe x4 slot instead? Guys over at 2cpu have had success using the x8 slots with the Asus and DFI boards, so I would *presume* that the x4 slot would work fine. I haven't tried personally... yet.
Quote:
Originally Posted by Hassan
Got off the phone with Hitachi GST Support, and they mentioned a firmware update. We will see if it helps since my drives have like the first two revisions of firmware...haha. I will be testing this immediately.
I have it available for download for those of you who want to update it. Just PM me, and I will email it. It is in .iso format, and must be burned. The instructions will be available as well.
I wouldn't use the 3ware card; its performance has been shown to be fairly low for all the money you're spending.
Quote:
Originally Posted by Hassan
http://www.gamepc.com/labs/view_cont...50sx4lp&page=6
Those benches are for the 4 port and not 8 port version but they're essentially the same hardware, just more cache IIRC for the 8 port version.
I've got external and internal hard drives for backup; honestly, though, I couldn't care less if the array gets hosed. I'm looking to run it in a desktop environment, not a mission critical server. Worst comes to worst, I'm prepared with all my data on the backup hard drives. I've also got a custom install DVD for WinXP that has all my drivers and some of the smaller apps (ie. CloneCD, DVD Decrypter, etc.) on it, and fresh off a format I've timed the install process at roughly 7 min. on my array. :p: IMO the inherent loss of reliability in a RAID array for desktop environments is massively overblown as well. My particular array has been up and running since mid-August 2005 with no issues... :banana:
Quote:
Originally Posted by vitaminc
I've always sworn by 3ware, but I am seeing Areca's name more and more; maybe I'll try that one out instead. thx for the linkage
That is because right now Areca is making the world's best PCIe SATA RAID cards.
Quote:
Originally Posted by Hassan
Sorry if this has been answered already (didn't read the whole thread), but what controller should I get then for SATA, to increase performance? Keep in mind that it has to fit the Expert (I don't want to stick anything between the GPUs), so that leaves me with PCI as the only option...
If all you can use is PCI slot based RAID cards, then forget it and just stick with the NVRAID; with 4 drives you'll run up against the bus bandwidth limits long before you reach peak performance - quick numbers below.
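Quick numbers on the PCI bottleneck; the per-drive rate is an assumption, while ~133MB/s is the standard 32-bit/33MHz PCI ceiling shared by the whole bus:
Code:

# Classic 32-bit/33MHz PCI tops out around 133 MB/s, shared by
# everything on the bus -- and that's before protocol overhead.
pci_bus_mb_s = 133.0
per_drive_mb_s = 70.0      # assumed sequential rate per fast drive
drives = 4

demand = per_drive_mb_s * drives
print(f"{drives} drives want ~{demand:.0f} MB/s; PCI offers ~{pci_bus_mb_s:.0f} MB/s")
print(f"the bus saturates at ~{pci_bus_mb_s / per_drive_mb_s:.1f} drives' worth")
# ~280 MB/s of demand against a ~133 MB/s bus: a PCI RAID card can't
# let 4 fast drives stretch their legs.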
Umm, the Expert has two x16 slots, so one graphics card and one RAID card.
Quote:
Originally Posted by D_o_S
should work nicely..
PCIe is what you should get; I suggest Areca.
So, yeah, I can agree with the Areca conclusion. I put my Areca 1210 PCIe under the microscope today, and this is what I came up with.
My previous fastest was a PCI-X based Adaptec card; the following are both those cards going head to head.
This first image is a reasonable 2x raid0 setup using the WD RE2 400yr drives.
The sequential stripe is identical, but notice what some well-tuned cache can do for burst performance. Red is Adaptec, blue is Areca.
http://home.comcast.net/~butterfry/h...EC.compare.jpg
In this shot I'm using a 4x raid0 of the same WD drives. While I'm not advocating this as a good setup, it is the best way to test the limits of a card and the bus.
http://home.comcast.net/~butterfry/h...EC.compare.jpg
nice scores, came out to be a little better than mine with those 400GB WD's.
i've just got 4x74GB raptors and plan to raid-0 them. don't want to purchase a separate PCI-E controller tho, so it will go on the NF4 ports on my SLI-DR mobo
Read earlier in this thread - I've posted a screenshot of what happens to the NF4 controller with that much data.
Um... I'm using 2 6800 Ultras in SLI ;)
Quote:
Originally Posted by nn_step
I can't get Areca... so should I go for Adaptec? What about HighPoint?
Adaptec is usually OK, Highpoint sucks.
No matter what, do a search for reviews on the card you're interested in before buying.