
What do you think about RAID 0?



RBS
07-18-2008, 02:31 PM
Hi all, I'd like to know your opinion about RAID 0 configs. Regards

stevecs
07-18-2008, 03:38 PM
Personally, the phrase "friends don't let friends run RAID-0" comes to mind. The reliability/availability of the array is MTBF / # drives in the array. Having run numerous systems and many disks over the years, I would never recommend RAID-0 for any production work of any type (build up other arrays with redundancy to reach the desired performance goals, but never a non-redundant version). I know I'm probably in the minority here on this, though.

Basically you pick your RAID level depending on what your requirements, goals, risk profile, etc. are. You can take a look at the spreadsheet here: http://www.xtremesystems.org/forums/showpost.php?p=2728049&postcount=52
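A quick back-of-the-envelope sketch of that "MTBF divided by number of drives" rule, in Python; the 600,000-hour figure is purely illustrative, not from any vendor spec sheet:

def array_mtbf(drive_mtbf_hours: float, num_drives: int) -> float:
    # RAID-0 has no redundancy, so any single drive failure takes out the array.
    # Rough rule of thumb: array MTBF = drive MTBF / number of drives.
    return drive_mtbf_hours / num_drives

drive_mtbf = 600_000  # hypothetical desktop-drive MTBF in hours (illustrative only)
for n in (1, 2, 4, 8):
    mtbf = array_mtbf(drive_mtbf, n)
    print(f"{n} drive(s): ~{mtbf:,.0f} h (~{mtbf / 8760:.0f} years of continuous use)")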

updawg
07-18-2008, 05:03 PM
I run two 74GB Raptors in RAID 0 and save all my media on an additional storage drive. I just keep nothing important on the system hard drive. I recommend it because it's extremely fast, but as the poster above pointed out, you do have that risk of data corruption. I have been running on RAID-0 for 2 years straight without a hiccup though.

vaio
07-18-2008, 05:05 PM
Great for general use but not recommended for systems where data cannot afford to be lost. :)

XS Janus
07-18-2008, 07:55 PM
Anyone tried running dedicated drives vs. Raid0?
How do they compare in everyday computing like general PC usage and some gaming?
I would love to see that tested myself.
:)

VIctorj
07-18-2008, 10:44 PM
I would like to know... because Raptors are extremely expensive here... what if I get 4 Hitachi 80GB 8MB SATA2 drives in a RAID 0? What do you guys think?

Salahuddin
07-19-2008, 06:05 AM
I have both my systems set up in RAID 0 without any backup storage. All my information is on it. I notice a performance increase, particularly when loading up programs. I do a DVD data backup every 6-12 months, but I suppose I would lose some important information if the array ever went down between these periods.

Perhaps I've just been fortunate, but I've never had a hard drive fail on me, even after >10yrs of use.

The way I see it, RAID 0 will fail if one of the HDs fails. If the chance of one HD failing is, say, 2%, you're looking at roughly a 4% chance of the RAID 0 failing. Sounds to me like it's still pretty safe, but that may be an incorrect way of looking at it.

I reviewed that spreadsheet. Not sure if I read it correctly, but it seems like the mean time to data loss for a RAID 0 array is just under 5 years? Is that correct? I wonder how one would actually figure all that data out. How long would you have to be using the drive for, on average, per day?

stevecs
07-19-2008, 07:18 AM
Yes, for physical failure issues the probability is increased by the # of drives (same as saying MTBF / # of drives), but then you have cabling issues (loose cable/power, connectors), which are very prevalent and which increase the risk further. If you're lucky or comfortable with the risk, fine; I just want to make sure that we're all aware of it.
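As a small sketch of that compounding (assuming independent failures and reusing the 2% example above; the exact figure is 1 - (1 - p)^n, which is close to n * p when p is small):

def raid0_failure_prob(p_single: float, num_drives: int) -> float:
    # Probability that at least one drive fails, which means array loss for RAID 0.
    return 1 - (1 - p_single) ** num_drives

p = 0.02  # illustrative per-drive failure probability from the example above
for n in (1, 2, 4):
    print(f"{n} drive(s): {raid0_failure_prob(p, n):.2%}")
# 1 drive(s): 2.00%
# 2 drive(s): 3.96%  (close to the "4%" estimate above)
# 4 drive(s): 7.76%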

Salahuddin
07-19-2008, 07:28 AM
So these are my questions:

- Did I interpret that spreadsheet correctly? Is the mean time to data failure on that graph the hours/days/years under the results?
- I see a note saying 8h/day for desktop use and 24h/day for enterprise; are those results for 8h/day usage then, or 24h?
- How did the tester actually test the drives to come to these conclusions?
- Can you convert the information on that spreadsheet into some hard percentage or duration for when I should expect a hardware failure on my RAID 0 arrays if I used them, say, on average 4 hours per day, or is that not possible?

I've had 2x Hitachi T7K250s in RAID 0 for almost 4 years now, although they have gone through several reformats due to upgrades, motherboard returns/RMAs, etc. One of them even has slightly damaged plastic around one of the SATA plugs, so the plug can come out easily if it's knocked from one side :( This PC is on for most of the day because I have a magicJack attached to it (a little unit for free long distance), so the computer must stay on, but HD activity is pretty minimal.

Does reformatting and reinstalling an array protect against data loss from bit errors?

I think I'll just make sure a copy of all sensitive information is on the other PC just to be safe, but I've always thought the chance of RAID 0 failing was pretty low.

xMrBunglex
07-19-2008, 07:30 AM
For general computing, you will see your OS boot up faster with RAID-0. You will also see programs load faster. For gaming, the game will start faster and new maps/levels will load faster.

I love how people always talk about the danger of losing all your data with RAID-0. The same applies to people with single hard drives too, and they represent the majority of casual PC users out there. I run RAID-0 with a third hard drive inside my case to act as data storage for my big files. I have S.M.A.R.T. turned on. I know it isn't perfect, but I feel relatively safe. Instead of waiting for my hard drives to fail, I'll probably just replace them after three years of service or so.

Salahuddin
07-19-2008, 07:44 AM
That's a pretty crappy 3DMark06 score. You must be ranked like 50th and have a slower score than a 3870 X2 in CrossFire... wth :)

stevecs
07-19-2008, 08:18 AM
Desktop drives (Seagate's "AS" lines, or pretty much any manufacturer's NON-enterprise drives) are rated for 8-hours-a-day operation, so all the MTBF ratings etc. are based on that. Enterprise drives are rated for 24x7 operation. So a desktop drive rated at 1,000,000 hours is really much less than an enterprise drive at the same rating. That spreadsheet is a calculation sheet: YOU put the data for the particular drive you want to model into the yellow cells, and everything else is based on that. It is a statistics spreadsheet; it is NOT a guarantee of any particular drive or solution. It is the statistical probability of a set of drives in a particular layout reaching failure and the point of data loss, based on the underlying hard drive components (i.e., not including cables to the drives or the controller itself, nor anything higher than that like filesystem, OS, or user error).

The hour/day/year is just a human-readability expedient. I build arrays for numerous purposes (up to hundreds of drives per array); some arrays have a lifespan requirement of days or weeks, others may have lifespans of years or longer. The result area I have color-coded with basic rules of thumb (green = good, orange = be aware/careful, on the edge, red = don't do it); the MTTDL (mean time to data loss) should be about 10x or greater than the expected service life of the array. The other item to be careful of is the probability of not reading all the sectors in the array. This is where larger arrays/drives kill you. RAID protects against some hardware failure modes, but after a certain point in number of drives/sizes the bit error rates overtake the RAID's ability to recover. To mitigate this you create multi-level RAIDs (RAID 61, 51, or 100, or other combinations depending on your need). There is a short overview of the different RAID levels and how they are laid out on the disks in there as well, for edification.
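To illustrate the "probability of not reading all the sectors" point, here's a rough sketch; it assumes independent bit errors and uses the commonly published BER figures of one error per 1E14 or 1E15 bits read, with an example array size (the real spreadsheet models more than this):

def prob_clean_full_read(capacity_tb: float, num_drives: int, bits_per_error: float) -> float:
    # Chance that every bit in the array reads back without an unrecoverable error.
    total_bits = capacity_tb * 1e12 * 8 * num_drives
    return (1 - 1 / bits_per_error) ** total_bits

for ber in (1e14, 1e15):
    p = prob_clean_full_read(capacity_tb=0.5, num_drives=4, bits_per_error=ber)
    print(f"BER 1 in {ber:.0e}: ~{p:.1%} chance of a clean full read of a 4x500GB array")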

Reformatting has no effect on bit errors at all. A format is a higher-level function. Bit errors are random, though they have a higher probability of occurring the larger the drive/amount of data being transmitted.

The sheet was made up mainly to help some of my co-workers here at work and to avoid having to do the calcs by hand each time, as that got old. The performance #'s for IOPS on writes in the parity RAIDs 3/4/5/6 are wrong, I know, but I haven't updated the spreadsheet for that, as performance was just an afterthought to add in the first place because too many people were asking about it. I really need to go back and fix that, plus add at least cable and controller MTBF ratings to the calculations.

Loony
07-19-2008, 08:34 AM
I had four Hitachi drives in RAID-0 for over two years on a 24/7 machine and never had any problems. :shrug:

As long as the drives don't get too hot (or too cold) and you don't shake them while they're reading/writing, they should be fine.

If it's for a place of employment though...I'd say RAID 5 or 6; definitely not 0.

Salahuddin
07-19-2008, 11:23 AM
Wow stevecs,

You sure know your stuff.

Thanks for all the info, didn't actually realize that YOU developed that spreadsheet.

When I get home I'll try to input all the required info and see what it churns out.

Addendum: Was just fooling around with the spreadsheet... you mentioned that desktop drives are rated for 8h/day operation. Is that the same as "Rated Duty Cycle Of Disc"? Because when I change that from 24h to 8h, the RAID 0 results get pretty scary. And where do I enter the number of drives in the array? Is that "Stripe Width"?

Addendum 2: OK, if I change Stripe Width to 2, Rated Duty Cycle of Disc to 8h, HD size to 500GB, and read seek and write seek to the appropriate settings (couldn't find any more information for my drives, and read/write seek seem irrelevant to the calculation anyway), it gives me a median failure rate of 22.7 years for RAID 0 (and the box turns orange). Does that sound right?

fng77
07-19-2008, 11:29 AM
I Like it.

motopen1s
07-19-2008, 12:27 PM
I Like it.

lol, that is exactly what I wanted to answer :)

stevecs
07-19-2008, 12:33 PM
Yes, the rated duty cycle of the disc is what you want to change to indicate the type of drive you have. The general rule of thumb is that most manufacturers rate the duty cycle (if you can even get it without calling them on the phone for hours) for desktop drives at 8 hours; enterprise drives are 24. You modify that in relation to the type of drive you have.

Stripe Width is just a means to help clarify how data is laid out in arrays. Basic terminology: "Stripe Size" (otherwise known as 'chunk size' in some software RAID circles) is the amount of data per logical block that is written to an individual disk in an array. "Stripe Width" indicates a complete parity block (i.e., "Stripe Size" * "# drives in the base array"). So, for example, an array of 10 drives with a stripe size of 64KiB would have a stripe width of 640KiB. The 'user' data in that section would be "Stripe Width" - "Stripe Size" * "# parity disks", so for the same array a RAID-5 would have a user data portion of 576KiB and a RAID-6 would have 512KiB. This is used more for tuning an array to certain types of workloads. Generally speaking, for home OS use the workloads are so varied you would be better off keeping the default for the controller/system you have, as that's generally the one that is optimized the best.
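In code form, just restating the example numbers above (a sketch, nothing controller-specific):

def stripe_width_kib(stripe_size_kib: int, num_drives: int) -> int:
    # Full stripe across all drives in the base array.
    return stripe_size_kib * num_drives

def user_data_kib(stripe_size_kib: int, num_drives: int, parity_disks: int) -> int:
    # User-data portion of one full stripe (excludes the parity disks).
    return stripe_size_kib * (num_drives - parity_disks)

print(stripe_width_kib(64, 10))   # 640 KiB full stripe for 10 drives @ 64 KiB
print(user_data_kib(64, 10, 1))   # 576 KiB usable per stripe in RAID-5
print(user_data_kib(64, 10, 2))   # 512 KiB usable per stripe in RAID-6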

Yes, getting information on the drives is getting harder (especially for WD drives, as they normally don't like publishing the information; you have to badger them to get it). Anyway, 2.27E+01 is just scientific notation (sorry, had a bunch of science background) for 22.7 years; 2.27E+02 would be 227 years. But like I said before, this is all statistics across large samplings. So the generic rule of thumb is to take that number and compare it against your service lifetime (i.e., how long the drive/array needs to be in production/operation). If it's less than the service life, it's red (high probability that it will not last / will have an error or data loss event). If it's >= the SL but < 10 times the SL, then it's orange; this is the 'careful' range, and for enterprise/business and most production uses you don't want to be here. Then there is green, > 10 times the SL. At this point your array from a RAID standpoint is good (you still need to watch the probability of not reading all sectors, as I mentioned before).
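That rule of thumb as a tiny sketch (thresholds exactly as described above; the 5-year service life is just an example):

def mttdl_rating(mttdl_years: float, service_life_years: float) -> str:
    # Colour coding: red below the service life, orange up to 10x, green beyond.
    if mttdl_years < service_life_years:
        return "red (don't do it)"
    if mttdl_years < 10 * service_life_years:
        return "orange (careful, on the edge)"
    return "green (good)"

print(mttdl_rating(22.7, 5))   # the 2.27E+01-year RAID 0 example above -> orange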

It's good to know whether you're well over the 10x rule of thumb, or whether you need to think about other types of arrays or multi-level organizations. Around here at XS, with the limited number of drives most people have, it's on the extremely small scale of arrays. Most arrays don't really start taking off with fewer than 8 drives. When you start getting into 12+ drive arrays you start to saturate channels, and then you need to go to multiple cards/multi-level RAIDs, and then you hit other blocks when you start getting into very large systems (mainly limits on I/O, # drives per base array, number of arrays, BER rates for the entire system, et al.). For example, the Seagate Savvio (the 2nd array calculation I think I have in the sheet) is the top drive today, with the lowest BER rates in the industry. Looks wonderful for most systems. Now change the # to 128 drives. BER rates are still good (sector reading probabilities), but if you were to put them into a single array (not possible today as the max is 32 drives) you'd fail. However, if you had that as four 32-drive arrays you're back in the running. Try to do that with drives with 1E+15 BER rates and you'd blow yourself out of the water.

To give some examples, some of the larger builds we have are over 1000 drives, though granted that's not common (mainly for virtualization of many databases, so IOPS is a must as well as streaming performance).

Lekko
07-19-2008, 06:26 PM
Personally, I run dual RAID-0 arrays on my system, mirroring the files I want to keep in case of failure. This way I have things like music/pics/work files/etc. backed up across both arrays, yet keep all of it accessible.

I do NOT back up program files or OS files or other random junk, only the stuff I would care about if lost. That way I can keep critical things backed up, yet still have more space free since program files and junk files are not redundant. Plus, everything runs at Raid-0 speed without the lag of mirroring or any other parity calculations. I'm no expert on data storage, but it works out pretty damn well for me.

RBS
07-21-2008, 12:10 AM
Thanks for your opinions guys/gals. I've been with RAID 0 for about 5 years and never had any issue, but people here are saying that RAID 0 is just marketing and there is not any performance gain. I disagree, because the system boot-up, apps, games, etc. really do load faster. My experience tells me that if you know what to do you won't have any issue with this setup, I mean good cooling and good mounting.

cablesguy
07-21-2008, 12:19 AM
I'm on a single drive now. I agree with RBS, there is a noticeable difference, and I never had any problems with RAID 0. I'll be back on RAID with the next reformat, probably with an add-in card this time.

Jim Morbid
07-21-2008, 12:23 AM
Is there a sufficient gain from using a 'software' RAID solution like those found on the Nvidia and Intel boards (Matrix in the Intel's case)?

JM

cablesguy
07-21-2008, 12:51 AM
Is there a sufficient gain from using a 'software' RAID solution like those found on the Nvidia and Intel boards (Matrix in the Intel's case)?

JM

Speaking for the Intel ones, it's not blazing fast if that's what you mean, but you will feel the difference here and there. Never used Nvidia so I'm not sure, although I've read there are quite a few issues with their onboard RAID.

Jim Morbid
07-21-2008, 12:58 AM
Another question: is the best speed-and-redundancy method RAID 5 or a RAID 10 (01) setup? Basically I'm sick of using Raptors (especially with 32MB-cache SATA drives around) and would like a speedy and redundant storage solution for my work/gaming PC.

JM

cablesguy
07-21-2008, 01:27 AM
If you're looking for speed, I'm pretty sure RAID 5 is not the way to go; RAID 01 I should think, as opposed to RAID 10. A better option would be looking at an add-in RAID controller with write-back cache enabled plus the battery backup for the card... and just run it in RAID 0 with a minimum of 3 HDDs.
Would be nice to hear what the experts have to say.

I'm currently studying some options, but instead of speed I'm looking for redundancy, which so far leads me to conclude RAID 6.

Jim Morbid
07-21-2008, 01:42 AM
Raid6?

Really don't want an additional RAID controller, due to cost and space really.

JM

stevecs
07-21-2008, 04:41 AM
@Jim Morbid/cablesguy: for speed it would be (fastest to slowest) 0, then 10 (you don't want to do 01, but it's effectively the same as 10 if you only have 4 drives), then 3/4/5, then 6. For IOPS the difference is huge, as with parity RAIDs (3/4/5/6) you are limited in IOPS to that of a single drive, assuming a random write workload. This is not true for 0 or 10, where you increase the # of IOPS when you increase drives, which is why parity RAID is bad for write-intensive applications.
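A rough sketch of that scaling, following the worst-case simplification above (the 75 IOPS per drive is just an illustrative figure; real numbers depend on controller, cache, stripe size and workload):

def random_write_iops(level: str, num_drives: int, drive_iops: float) -> float:
    if level == "raid0":
        return drive_iops * num_drives       # every drive takes writes independently
    if level == "raid10":
        return drive_iops * num_drives / 2   # each write lands on a mirrored pair
    if level in ("raid3", "raid4", "raid5", "raid6"):
        return drive_iops                    # worst case per the post: one drive's worth
    raise ValueError(level)

for level in ("raid0", "raid10", "raid5"):
    print(level, random_write_iops(level, num_drives=8, drive_iops=75))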

As for availability (sometimes referred to as reliability) of the array, it increases from RAID-0, to RAID-3/4/5, to RAID-10, to RAID-6 for the base RAID levels.

So basically, if you want the best combination of speed & reliability it would be RAID-10; the only downside would be the number of drives (the reason the parity RAIDs were invented: to improve storage efficiency, but at the cost of lowering performance). A benefit of RAID-10 is also that it is a NON-parity RAID, so you don't need any fancy controllers unless you want the cache or for other reasons.

Jim Morbid
07-21-2008, 04:55 AM
Thanks Steve.

I thought a parity array would beat a 10/1 mirror in the availability department any day, purely because you can reconstruct the RAID on the fly (well, I lie, I know that is true for 5, but TBH I've never even heard of 3/4 [Google time!]).

I don't think that my current setup is that IO-intensive (just Windows and gaming) but my new setup might be: basically I'm going to install an Ubuntu VM, give it half the available space, set up a CryptFS and share it as a network drive. Looks like I'll have to check what the motherboard supports RAID-wise and then do some benchies.

JM

stevecs
07-21-2008, 05:29 AM
Actually that's incorrect; you would have higher availability and rebuild performance under RAID-10 than under any of the parity RAIDs. Remember that with RAID-10 every PAIR of drives can rebuild at the same time, and then you have the additional hit of the random performance of parity RAIDs (i.e., if you were trying to /use/ the array during a rebuild you would take a real hit).

Nothing beats real-world #'s, so yes, really bench it for your application/workload. The above is based on worst-case scenarios (the only thing that can really be compared in general terms), as everything else is workload-specific, which requires empirical data. The programs I like are iozone and XDD (both Windows & Unix) and bonnie++ (Unix). Just remember to take multiple readings (throw out the worst/best and average the rest).

cablesguy
07-21-2008, 06:47 AM
Thanks for the explanation stevecs... so basically RAID 6 would be the way to go for redundancy peace of mind. Been meaning to ask this question: how much of a difference would an enterprise drive make against a desktop drive in a RAID array?

Scenario
Say, for example, a Seagate ES.2 against a WD Caviar Black or Green, in a RAID 5 or 6 via an add-in RAID controller, running mainly as a multimedia server and downloading torrents, so basically 24/7.

As the Green's pricing is comparatively very attractive.

TIA

stevecs
07-21-2008, 07:14 AM
There are a lot of factors involved; the main ones are duty cycle, vibration tolerance, and time-limited error recovery. Enterprise drives have firmware and components that allow for a 24-hour duty cycle (always on/working), have MUCH higher vibration tolerance (i.e., 8, 10, 100, 1000 or more drives in a chassis/SAN), and have time-limited error recovery, where a drive that encounters an error will attempt to recover the information only for a very short period of time, then just tell the controller/OS that there was an error and let the higher logic of the controller/OS handle the situation. ALL of the above is very important in RAIDs.

Desktop drives are normally rated for 8-hour duty cycles (i.e., you turn the computer off after 8 hours/day), and they do not have any vibration tolerance to speak of, so the more drives you have in a chassis, the more positional recalibration is needed, which could drop a drive offline from a RAID point of view or produce other errors. Then there's time-limited error recovery: on desktop drives the manufacturer's mindset is that there is NO other logic outside the drive for recovery, so the drive will attempt EVERYTHING possible to fix errors (which can take several minutes or longer). This is good for a stand-alone drive on a controller but /very bad/ from a RAID point of view, as it will cause the array to degrade, and since there is a higher probability of a 2nd drive error after your first one, this can cause cascade failure for the entire array.

Yes, I cannot comment on price; they are attractive. However, over the past 30 years of playing I have never seen the saving of a couple $$ be any benefit when you take into account the additional risks. You may be lucky, or buck the trends for a while; just know that you have a much higher risk of losing the array. Only you can assess the value of the data you are storing to make the cost/risk justification.

Also, just to mention here again: RAID-6 is the highest-availability of the standard RAID levels, however /NO/ RAID level will protect against data loss/corruption. You need backups. Remember, RAIDs are for array availability, not data integrity; those are two separate things. You should always have a backup of your data and a backup scheme to keep it up to date.

cablesguy
07-21-2008, 08:14 AM
Thanks for the reply stevecs, never actually thought of that: in a RAID array where the number of drives tends to increase over time, vibration would tend to follow, not forgetting the accompanying heat.

And thanks :up: I'm aware of RAID being only a redundancy and not a backup solution per se.

Jim Morbid
07-21-2008, 09:14 AM
What a killer explanation. Thanks Steve.

JM

stevecs
07-21-2008, 10:43 AM
No prob. Glad it helped.

Salahuddin
07-21-2008, 09:05 PM
I really don't know what to do now. After reading this thread and reviewing a lot of articles on the Internet, it seems like the general conclusion is that RAID 0 offers little performance benefit for a gaming PC.

I was thinking of just making copies of all sensitive data on the opposite PC (each of which has two HDs running RAID 0).

But now, if RAID 0 isn't really doing much, I'm thinking I should just format the HDs and run two single HDs in each PC. Apparently, using onboard RAID (I have Intel ICH9R on both PCs) can take more CPU overhead, thereby reducing performance. Not only that, I read that sticking Windows on one HD and the page file on another can actually increase performance where it really counts.

What would you recommend as the best setup for two single HDs, stevecs, for a gaming PC? I'm not overly concerned about data since, as I said, I can make copies of all sensitive data on the other PC, plus I make DVD backups every 6 months or so. But a sudden crash of an array would be quite frustrating. Should I stay with RAID 0 or switch?

zanzabar
07-21-2008, 09:14 PM
Why are you guys so anti semi-software RAID for 0/1? It doesn't have that much CPU overhead (RAID 5 I'm not advocating semi-soft for), and most games won't use all of your CPU, so it's not a big deal to have 1-2% used when writing large files and less when reading. The loading times and responsiveness in Windows and when loading large files outweigh the risk IMO, if you have and are using a backup plan.


And if you want reliability, don't buy Seagate or WD; in enterprise boxes I've only seen SATA Hitachi Ultrastar/CinemaStar and Samsung (F1/RAID and normal), and Samsung makes the fastest 7200 RPM drives. Then only Fujitsu for SAS/SCSI.


Also, why RAID 6? The chances of 2 drives dying at the same time are very low, and unless you are putting a box at a colo, RAID 5 is more useful, or RAID 1/10.

stevecs
07-21-2008, 10:21 PM
@Salahuddin - For a gaming system I would go for fast drives (higher RPM, say 10K RPM) to help with IOPS for the OS, with the highest bit density I could find (to increase streaming read/write), and I would probably still use RAID-1, as this provides 2 copies of the data, so with a good RAID setup it will help with reads and still provide some additional availability. However, it depends on the types of games, and even specific games, what types of disk subsystem requests are being made, for this to have any real impact. I would turn off your pagefile and buy RAM in any case. From your sig you have 4GB now; I don't know if that's XP or Vista, or 32- or 64-bit. Regardless, turn it off and run your games; if you run into a memory resource issue, buy more. In any case most games are GPU- or CPU-bound; there is not much (after loading a level/sector into memory) that a drive really has to do. You can run some disk tests to see your workloads when you're running your normal applications; I wrote a quick spreadsheet that does that under Windows here: http://www.xtremesystems.org/forums/showpost.php?p=2729437&postcount=54

@zanzabar: the problems with software RAID are that the additional hits to CPU cycles and low-level driver I/O do add up, plus issues with OS/driver compatibilities and poor south-bridge bandwidth (as opposed to a PCIe card, though only really an issue with more than a couple of drives). Then there is the maintenance/ease of running/use; hardware RAIDs have a lower ongoing support cost (time/level of understanding needed for the tech to fix/repair/maintain, which is more important in business than perhaps here). If, in your workload, you don't see any problems with your system's performance nor any issues with the above, great! You've got a good solution for yourself. The trap is assuming that all workloads are the same or that the same solution will work the same way for others.

As for HD manufacturer reliability, I have the same fallacy in that, out of personal experience, I prefer Seagate drives over others. However, when you take large samples of drives (and here I mean several hundred thousand or more), where they become statistically significant, the differences in hardware failure rates between vendors are quite low. Most of the time it's bad batches or firmware (software) incompatibilities. You generally end up picking a drive that works well with your controller/setup. You really need to follow the hardware vendor certification list for your particular setup, as most only test a very small number of combinations. Find something you like and go with it if it works. This should generally take precedence over spec sheets, as it doesn't matter if something is X faster if it only works <100% of the time.

As for probabilities of multiple drives failing, check the spreadsheet; that's why it was created. Parity RAIDs were developed for storage efficiency & availability more than performance. If you do not plan on growing an array to large sizes (many drives), and if it's not for a storage (or predominantly read-oriented) subsystem, then yes, RAID-6 may not be a good mix for you; RAID-10 may be better. RAID-5 is /ok/ but only for smaller arrays, due to bit error rate issues, which are more prevalent the larger the drives and the more of them you have. There is /NO/ one single array type that is good for everything; that's why there are so many you can choose or create (JBOD/0/1/10/3/4/5/6/100/31/41/51/61/30/40/50/60 et al.). All have their places.

Salahuddin
07-22-2008, 08:14 AM
Well, the reason I went with RAID 0 was to get some minor performance gains without shelling out cash for a 10K Raptor with less HD space. I realize that the HD isn't doing much during actual gaming, and for me the cost of a Raptor was not justified. But 7200RPM 500GB HDs are $70 at the moment and I got free RAID on my board, so for well under $200 I can get 1TB of space and RAID. Or for about the same price I could have got a 150GB 10K Raptor. And the way I see it, the argument against going RAID 0 is pretty much the same argument against going for a Raptor... for a gaming PC there are limited real-world performance increases. The main difference between the two setups is that data reliability on a RAID 0 array is worse, but the Raptors are much more expensive. However, I get the impression that RAID 0 is not really as dangerous as it's made out to be. Perhaps I have to have an array crash on me before I open my eyes, though.

As for the Page File, I doubt I even use it much. I have Windows x64 Pro and I can turn off the Page File easily, but some programs just want it and cause problems, so I just leave it on.

As for HD failures with different vendors, I've never seen any objective data to show which ones fail more. I was under the impression WD and Seagate were good, reliable drives. However, there was a batch of Seagate 500GB SATA drives that had problems; I actually paid a little more to get WDs in recent builds because of that.

stevecs
07-25-2008, 04:31 AM
All of those are good reasons; as long as you've considered the risks, that's the main point. As for the pagefile and certain programs requiring it, I'm curious, as I've heard this before, but whenever I've had to look at an example it has always been due to lack of system memory. What programs, if I may ask?

For objective studies on drive failure statistics, yes they are hard to come by as many who can run them bottle up the information and don't publish.

I don't have the links handy here (been moving around the past couple days) but look for:
- "Failure trends in a large disk drive population" by Eduardo Pinheiro, wolf-dietrich weber & luiz andre` Barroso of google.
- "Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you" by Bianca Schroeder & Garth Gibson
- "TerraServer SAN-Cluster architecture and operations experience" by Tom Barclay & Jim Gray

And for a general overview of reliability/performance and how it's derived (it may be too detailed), this older paper (1994) is good: "Reliability and Performance of Disk Arrays", a doctoral dissertation by Thomas Johannes Emil Schwarz.

All of the above are on the 'net.

Salahuddin
07-25-2008, 01:52 PM
Thanks Stevecs,

I'll look through those papers when I get a chance.

As for programs causing problems without a pagefile, Prime95 is the only one I am absolutely sure about. I had issues with occasional lockups in Morrowind as well, which seemed to go away with the re-introduction of a pagefile. There was another game too... I want to say it was Doom 3, but I can't be sure as it was quite a while ago.

ihatbs
07-25-2008, 02:19 PM
I've been using RAID-0 with Maxtor drives for two years and Seagate for one year and haven't experienced any damage or data loss.
But I'm still not comfortable with RAID 0 when moving between different RAID controller chips.

stevecs
07-25-2008, 03:49 PM
I'm running Prime95 right now (v25.6) without a pagefile on my game system. It hardly uses any memory, ~100MB with 4 concurrent workers. I've been running (on/off) with the GIMPS project since its early inception; if that's giving you a warning, you have something else that's really sucking up memory on the system, or a screwed-up executable. You're using George's, right?

stevecs
07-26-2008, 03:02 AM
It really depends on workload. RAID-0 will only improve performance if you have sequential requests that are greater than your stripe size (that way more than 1 drive will be used), OR if you have numerous requests that are <= your stripe size, in which case the distribution of requests will generally hit more than one drive (though the amount of distribution is an unknown quantity; it depends on numerous other factors, and by pure chance you'll get something, but it may not be linear with the number of drives).
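A tiny sketch of that first point (example sizes only; it assumes an aligned request and ignores caching and queuing):

import math

def drives_touched(request_kib: int, stripe_kib: int, num_drives: int) -> int:
    # How many drives a single aligned sequential request spans in RAID 0.
    return min(num_drives, math.ceil(request_kib / stripe_kib))

for req in (16, 64, 256):
    print(f"{req} KiB request, 64 KiB stripe, 2 drives -> {drives_touched(req, 64, 2)} drive(s)")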

For only 2 drives, and if you are NOT doing a lot of writing to the drives, I would really suggest RAID-1, not 0, which would give you higher availability and would improve reads to nearly the same extent as RAID-0 (assuming a properly implemented RAID routine). Writes would be the speed of a single drive.

If I take your meaning correctly, you want to RAID-0 one set of drives and then have another (3rd drive) for storage? That's possible, but there are zero guarantees against data corruption or loss. Remember that neither RAID, nor even a hard drive or filesystem, is designed to protect against that. RAID is there primarily for availability (making sure that the drive does not 'disappear' if one of them physically fails). RAID-0 (which shouldn't really even be called a 'RAID' as it's not redundant) doesn't even have this.

There is no real answer to what is 'faster' without really knowing your workload and what you're trying to do (what speed are you trying to reach, is it streaming or IOPS?). From what you're saying, I don't see any need at all for anything more than a RAID-1 of two drives (faster RPM would improve access time/IOPS, and higher-density platters would increase streaming speeds). In addition to that, you can partition your OS onto the first part of the drive and create your 'storage' as the other partition; that will give your OS the highest streaming speeds (outermost part of the drive).

Of the two drives you picked, the Seagates will have faster read/write speeds than the Raptor, but the Raptor will have faster access times. If I were only to buy two drives for a system from those choices I would pick the Seagates, as they have more capacity and higher bit density (faster reads/writes), though slower access times. I would use them in a RAID-1 and carve out at least two partitions: a 32 or 64GB 'C:' drive for the OS and applications, and the remainder as the 'D:' drive for data storage (or game install point, et al.).

stevecs
07-26-2008, 09:19 AM
Usually on this crappy drive I have, when I say extract or compress a 3GB file and then try to go on the internet, which requires the HDD to open IE at first, or open any normal program or something, or even when I'm moving 2 large files at the same time, my system is super slow, usually to the point where I can't do anything while extracting a compressed file or something. Sometimes if I push my HDD and do 2 things like that at once, my system will hang and I have to stop one to finish the other.

From this, it's really going to be an IOPS issue or drive queue saturation, unless you have something else wrong on your system (i.e., USB delays, drive errors (recovery), network delays, et al.). Unfortunately, to solve this you really need more spindles and probably separation of workloads. Unless you know what's being read/written where and what is queuing those requests (diskmon can help with the first, but you'll still need to track down the application/service), you won't really know what to move where (i.e., to multiple spindle groups). In light of that, you may need to just increase the # of drives for your array; two drives is /very/ small for a heavily hit system. For example, in my gaming system here, 4 drives in RAID-10 is the lowest number that handles my workload within acceptable parameters without a problem. I added 4 more mainly for space purposes, but it keeps drive utilization down enough that I don't have a problem doing video scanning, Photoshop (batch work), encoding (DVDs to ISOs, or CDs to FLAC), and still have I/O cycles to play some RTS games.

Look for a solution that has more spindles (drives) so you will have more IOPS. From your current system, if you have a single drive, you'll do well to double that (RAID 0 or RAID 1); if you can, quadruple it (especially if you get slower-RPM drives, as each drive would have fewer IOPS due to the lower speed).
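A rough way to see why spindle count (and RPM) matters: the usual back-of-the-envelope estimate is one random I/O per average seek plus half a rotation. The seek times here are illustrative, not from any particular spec sheet:

def drive_iops(avg_seek_ms: float, rpm: int) -> float:
    # One random I/O costs roughly an average seek plus half a rotation.
    half_rotation_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(f"7200 RPM, 8.5 ms seek : ~{drive_iops(8.5, 7200):.0f} IOPS per spindle")
print(f"10K RPM, 4.5 ms seek  : ~{drive_iops(4.5, 10000):.0f} IOPS per spindle")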

Salahuddin
07-28-2008, 08:36 PM
I'm running Prime95 right now (v25.6) without a pagefile on my game system. It hardly uses any memory, ~100MB with 4 concurrent workers. I've been running (on/off) with the GIMPS project since its early inception; if that's giving you a warning, you have something else that's really sucking up memory on the system, or a screwed-up executable. You're using George's, right?

Well, to be honest, the problem happened with 24.14 (manually set to run on two cores) initially, and I have kept the page file on since. As soon as I ran a torture test, within one second I would get a low-memory warning. My system is pretty streamlined and most of my memory was free. Of course, now I'm using 25.6, so perhaps the problem is gone.

RBS
08-04-2008, 03:26 PM
Thanks for your replies.

After reading this (http://www.nextlevelhardware.com/storage/barracudaraid/) review, what do you think: can it be trusted or is it wrong? If it can be trusted, I'd like to update my array and make some small volumes to improve performance and access time. I'm not sure whether I can make some small volumes (e.g. 1 volume of 100GB for Vista, 1 of 200GB for games, and the rest for storage), and whether I will get that increased performance on at least the Vista and games volumes. What do you think?
Another thing: I guess if I make these changes I will have to destroy my current array to make the new volumes, right? Well, if someone can answer my doubts I will decide whether to remake my RAID 0 config, which at the moment is nothing more than 1 volume of 931.5GB. Thanks in advance.

stevecs
08-04-2008, 06:32 PM
The 7200.11's are basically the same as the ES.2's, with the exception mentioned previously that they are desktop drives, so they don't have a lot of the RAID optimizations. Those drives have about 78 IOPS for reads / 73 IOPS for writes (random, full disk). They are not bad drives, though you may run into some issues due to their non-RAID nature. Partitioning the drives is a good idea, as that will at least lock in certain streaming performance values for each partition. Putting the OS first is good (fastest part of the drive). You won't get much out of the clipping if you are actually using the rest of the drive, as like I mentioned, each drive has a finite # of IOPS that it can service. If you spread them across the entire disk (by having partitions that use all the sectors of the drive), you'll still fall into the average access times.

What you're describing is pretty straightforward; the main question would be: do you have a performance or availability target in mind?

RBS
08-05-2008, 01:17 AM
Well, I'm not talking about partitions, but rather the volumes you make when you create the array. At the moment I have 1 volume of 931GB with 3 partitions on it. After reading that review and another forum, I saw that by making small volumes, access and reads are quite a bit faster. That's why I'm asking, because I'm not sure if that performance is real or not.
If I really get that access time improvement, it would be a good deal to make a small volume, maybe one of 150GB?
I asked in the last post whether I could make some small volumes, but after reading more I think it is pointless, because if I make use of 2 of the volumes the access time will fall again. So maybe the question would be just to make 1 volume for the OS and games to get that extra performance, and another volume (e.g. another RAID 0 or even RAID 1) with the rest of the disk?

Nanometer
08-05-2008, 01:41 AM
RAID-0 doesn't belong in the desktop. The day-to-day performance increases are so small, and that includes game performance. Two areas improve with RAID 0: benchmarks and large file transfers, that's it. If you want better boot times and a snappier desktop, get a single Raptor. I was another one of those idiots who ran 2/4/5 Raptors in RAID-0. None of them made Windows feel faster, nor did they help game load times. I spent years working with the Raptor arrays, but nothing substantial came through. So now I am finished with RAID-0... unless you're talking about SSDs, that's a different story altogether.

RBS
08-05-2008, 02:15 AM
Well, in the review I can see that except in OS load and game loads (and just by a few seconds), the RAID 0 with a small volume beats that Raptor; that's why I thought about making a new small volume to get that access time improvement. In that review, with a 2x25GB array, the access times for the RAID 0 7200.11 vs a single 7200.11 vs the Raptor are 7.7 / 12.4 / 8.1 ms. I've seen other scores too; for example a 100GB volume gets around 8-8.3 ms depending on the stripe, and a 200GB volume around 9.5 ms, not bad for desktop disks.

stevecs
08-05-2008, 03:28 AM
@RBS: yes, I understand; technically that's called a 'volume group' or 'volume set' (a raid set is the physical drives in a particular array). Anyway, it's not a VTOC partition, but it's doing the same basic thing: each volume set you create on the raid set (the physical drives of the array) starts from the outer part of the disk and moves inward, so fastest to slowest. And like I mentioned, it has no benefit on access times, as you are still using the entire physical disk (it does not matter if you're using 1 or 50 volume sets; the head still needs to move across the entire platter).

As for speeds, if you're looking for load times use RAID-1, not RAID-0. RAID-1 will in practice give the same speeds as RAID-0 for reads if you have a good controller (or software implementation), as you have two copies of the data (as opposed to RAID-0/3/4/5/6, which all only have one copy of the information). A good controller will load-balance the requests between drives, which will give you good performance no matter the request size; with RAID-0 you need request sizes that are > # drives * stripe size to really have an effect, which is not many requests for general desktop users.

As for your access time improvement, that works IF YOU ONLY USE THE OUTERMOST TRACKS of the disk. So if you don't mind buying, say, a 300/500GB or whatever drive and then clipping it to only use 10% of the disk, then great. But you end up throwing out 90% of the space you bought, since if you DO use it at all you lose the performance benefit. Also remember that access time does not really have anything to do with RAID level, so you'll get the same access times with RAID 0/1/3/4/5/6; access time is related to rotational speed, armature movement and settle time, those being the top items. None of that changes with RAID levels.

RBS
08-05-2008, 04:27 AM
Nice explanation. Thanks a lot stevecs, now all is clear. Regards.

tiro_uspsss
08-05-2008, 06:33 AM
*love* RAID0 - can't live without it! :up:

stevecs
08-05-2008, 07:17 AM
let me guess you like sky diving as well? :P

tiro_uspsss
08-05-2008, 07:29 AM
let me guess you like sky diving as well? :P

havent tried that tbh :D :p: - but I'd love to take up hang-gliding as a hobby if I could afford it :yepp:

:up:

Razmatazz
08-05-2008, 07:45 PM
I have read through this thread and found it very informative. Thank you all for the excellent information. RAID-0 portability between mobos can be an issue. Is RAID-1 fully portable? Meaning, can one simply take one HD and put it into another system and have it work?

stevecs
08-06-2008, 03:58 AM
RAID portability is based mainly on the controller and/or software RAID driver support. RAID-1 has the /potential/ to be better supported due to what it's doing (copying the same data to each drive), whereas the other RAID levels use some form of striping, so on a single disk the filesystem layout would be something like 1,3,5,7 as opposed to 1,2,3,4,5, et al. (assuming a 2-drive RAID-0 in the first example; it gets more complex with other RAID levels). If you have a hardware RAID controller, the benefit is that the RAID logic is on the controller, so if you move that with the drives from one system to another you should still have your array (assuming no motherboard/BIOS issues on the new system), regardless of RAID level. This does NOT mean you will still be able to boot from the array if it's your OS drive, as you may have driver issues for your new hardware, but all the data would be there.
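A tiny sketch of that layout difference (a hypothetical 2-disk, 8-block example; RAID 1 would simply have every block on both disks):

def raid0_disk_blocks(disk_index: int, num_disks: int, total_blocks: int) -> list[int]:
    # Which logical blocks end up on one member of a RAID-0 stripe set.
    return [b for b in range(1, total_blocks + 1) if (b - 1) % num_disks == disk_index]

print(raid0_disk_blocks(0, 2, 8))   # disk 0 of a 2-disk RAID 0: [1, 3, 5, 7]
print(raid0_disk_blocks(1, 2, 8))   # disk 1:                    [2, 4, 6, 8]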

This is one of the reasons why I use hardware RAID cards, so I can swap out my MB and still maintain my data arrays; however, this is /NOT/ a function of RAID itself, it's just a function of its implementation (on a hardware controller not tied to the OS, and taking into account that it's a data volume).

Th3MadScientist
08-06-2008, 04:10 AM
I don't even bother with a hardware RAID implementation that is connected to the motherboard. I'd rather have a Gigabit NAS that supports RAID 1. Sure, I'm sacrificing some speed, but it is still speedy. I get an entity that is separate from my computer, so if my computer dies or doesn't want to turn on, my data is still alive; plus it can be accessed over my network.

Razmatazz
08-06-2008, 05:07 AM
Thank you for the response. However, how about 2 drives in raid-1 configuration? Would one be able to simply take one of the drives and move it into another system at any point as a single drive? Also can one go from one drive directly to a 2 drive raid-1 configuration at any point without having to start from scratch?

Swatrecon_
08-06-2008, 05:36 AM
If you're worried about losing data, run RAID 0+1, just make sure your motherboard supports it because controllers are way too expensive.

stevecs
08-06-2008, 06:00 AM
Thank you for the response. However, how about 2 drives in raid-1 configuration? Would one be able to simply take one of the drives and move it into another system at any point as a single drive? Also can one go from one drive directly to a 2 drive raid-1 configuration at any point without having to start from scratch?

Not always. What a RAID controller (or even RAID software) does is put the RAID configuration data on the drive (to indicate that the drive is part of a raid set, its volume set information, RAID level information, plus any other defining characteristics). Some RAID controllers/software put this at the front of the drive (first sectors), others at the end.

When you take a physical drive that was part of a raid set and then plug it into another system SANS RAID controller/software, the VTOC or partition information may be offset, so the new system may not recognize that it has any valid layout. Generally, if you use it as a data drive (non-bootable), you should be able to find the partitions to pull your data off of it if it was a RAID-1, but that's not always going to be the case. And you still have the issue that if your 'new' system is not exactly the same as the old one, then for a bootable disk (even if you were able to boot it, e.g. RAID information at the end of the drive) you will run into OS issues (more of a problem under Windows/Mac than Linux, but still there). Remember, this is /NOT/ what RAID is for.

If you want to be able to move data around AND still have redundancy, then I suggest a RAID-1 on the primary system, and then use a product like Acronis or Ghost to do a drive image backup to a 3rd disk that you can pull out and move around (it would not be part of the RAID). You'll still have the OS hardware recognition problem, but it would be bootable.

What are you trying to solve or mitigate?

Razmatazz
08-06-2008, 02:33 PM
Excellent information. I am not trying to solve or mitigate any problem, just trying to expand my knowledge. I am currently running 2 Seagate Barracuda 7200.11s in RAID-1 and have a third drive that I dump all of the important files onto that I want to back up (photos/music/home videos), none of the actual programs or OS. I am using Genie Backup Manager Home 8.0. I really like the fact that this program has a mirror option for backing up the data, which is what I use. It has been working great for 4 months. I had no trouble recovering the data after I switched from 32-bit Vista to 64-bit Vista. After reading the above posts, I feel much better about how I am currently configured. Thanks again.

Th3MadScientist
08-06-2008, 03:25 PM
For backup I would go with Acronis hands down; for cloning a drive I'd go with a Norton Ghost bootable CD. Just an FYI: instead of spending hundreds of dollars on hard drive partitioning software for Windows Server, just use Ghost to repartition. Tried it and it works like a charm.

Razmatazz
08-06-2008, 04:36 PM
For backup I would go with Acronis hands down

Acronis is a good product for sure. It offers full, differential and incremental backups as does Geniesoft. Both Backup Manager and Acronis have received a number of accolades and Backup Manager has received top honors in several reviews. Backup Manager offers a mirrored backup which was attractive to me and is what I use. I don't think anyone would go wrong with either.