
Thread: RAID5 file server build advice

  1. #76
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Nice! Got to love those simple purchase order procedures

    As for the drives, I would go with the WD GP one, not the Samsung. Neither drive is rated for 24x7 usage (probably only 8 hours/day, like most desktop drives), but the WD has a rated unrecoverable error rate of 1 in 10^15 bits read, whereas the Samsung's is 1 in 10^14. That order of magnitude comes in handy the larger the drive.
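    That order-of-magnitude difference is easy to put into numbers. A quick sketch (illustrative only: it treats the rated "1 error per 10^14 or 10^15 bits read" spec as an independent per-bit probability, which is a simplification, and uses the drive sizes being discussed):

```python
# Rough odds of hitting at least one unrecoverable read error (URE)
# when reading a drive end to end. Vendors spec the rate as
# "1 error per N bits read"; we model it as a per-bit probability.

def ure_probability(capacity_tb: float, errors_per_bit: float) -> float:
    """P(at least one URE) over a full sequential read of the drive."""
    bits_read = capacity_tb * 1e12 * 8          # decimal TB -> bits
    return 1.0 - (1.0 - errors_per_bit) ** bits_read

for size in (0.5, 0.75, 1.0):
    wd = ure_probability(size, 1e-15)           # 1 in 10^15 (WD GP class)
    samsung = ure_probability(size, 1e-14)      # 1 in 10^14
    print(f"{size} TB: 1-in-10^15 -> {wd:.2%}, 1-in-10^14 -> {samsung:.2%}")
```

    On these assumptions, a full read of a 1TB drive is roughly ten times as likely to hit an unrecoverable error at 1-in-10^14 as at 1-in-10^15, which is exactly why the spec matters more as drives grow.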

    Also, if you are going to think about a MAID setup (powering down drives when not in use), then the 300,000 start/stop cycles of the WD drive is going to be a huge plus over the 50,000 of the Samsung.

    It's been too long since I've been in a 'quiet' room so I can't really comment on the noise levels.

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...... ............|
    |.Supermico X8DTH-6f................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Mino lta magicolor 7450..|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|........ .................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|............... ..........|

  2. #77
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Well, I've seen a video of the geekiest Aussies (I think), and one of them had the dining room converted to a server room with three racks running 24/7 at full blast. You could hardly hear the guy when he was talking... So, I think I can understand that sentence about "quiet rooms". For you, anything short of having to shout to make yourself heard in a server room would be a great improvement, right?

    Oh, btw, check this when you have a chance. I know, I know, I'm a bad boy

    Yes, I'm planning on letting the drives power down when not in use. My guess is that will wear them down more slowly (and will also keep the noise and power draw low). Thanks for the reference on the start/stop cycles, though, I didn't even remember that... I'll probably go with the 500GB, 750GB or 1TB GreenPower drives (probably one of the first two; the 1TB version is €230 a pop, and I can almost get two 750GB drives for that amount...). The 5400rpm speed won't kill the performance for what I want to do... hehe

    And yes, upper management here at home only says "yes" or "no" to generic budget requests, you just need to say what you plan on doing with the money (like: it's for a new graphics card, for a new ODD, etc.). Simple as that. However, "no" is a much more frequent answer than "yes", unless something is broken and needs to be replaced.

    Ok, now to get some sleep. I've been up since 7:50 am (~23:20 now), and I didn't stop for a minute the whole day... If I can, tomorrow I'll do some number crunching (to get just the parts I want), and this weekend I want to start planning the box

    Again, thanks for the help.

    Cheers.

    Miguel



    P.S.: I've pretty much hijacked this thread, didn't I? Sorry
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power to good use.

  3. #78
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    As for the benefits of MAID (i.e., powering down drives not in use and spinning them back up): I have found nothing that shows this helps with anything other than power use. It's still a highly debatable topic, as the power-up/power-down sequences actually put a high amount of stress on a drive, and how doing that often affects MTTF ratings has not been studied.


  4. #79
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Hmmm... That's an interesting topic, stevecs. I never considered that, actually. On my notebook, I try to keep the drive spun down as much as I can, because of the power draw; there are not many other ways to get around that constant ~1W draw... On the desktop, I think Windows usually doesn't let the drives power down (except drives with non-essential data) because of paging and such, so it's less of a problem there.

    Now, on a NAS, that's another story... Especially because if the drives are in an array, the response time will go through the roof (like ~10 seconds to the first reply because of staggered spin-up, or half that in the best case...). Perhaps the best thing will be to stick with slower GP drives, which already draw less power and make less noise, and keep them at a low AAM setting but without spin-down. I'll keep that in mind, thanks.
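    That ~10-second figure scales with array size under staggered spin-up, since an array read can't be served until every member is up. A rough sketch (the spin-up and stagger times are assumed ballpark figures, not measured values):

```python
# Back-of-the-envelope first-access latency for a spun-down array with
# staggered spin-up (SSU). Assumed numbers: ~8 s for a single 3.5"
# drive to reach speed, plus a per-drive stagger delay set by the
# controller to limit inrush current.

def wake_latency(n_drives: int, spinup_s: float = 8.0,
                 stagger_s: float = 2.0) -> float:
    """Time until the LAST drive is ready (an array read needs them all)."""
    return (n_drives - 1) * stagger_s + spinup_s

for n in (1, 4, 8, 32):
    print(f"{n:2d} drives -> ~{wake_latency(n):.0f} s to first byte")
```

    With these numbers a single drive wakes in ~8 s but a 32-drive array takes over a minute, which matches the "n times ~10 seconds" worry raised later in the thread.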

    Cheers.

    Miguel

  5. #80
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Well, "MAID" is a first for me!
    After a first search on Google I had to sift through "dirty latina maid" webpages to even get to the real thing.

    I ran into subjects like MAID 2.0 and SATAbeast, but haven't had the time to read about them properly.

    Is this done via some utility you install on your OS?
    What are the conditions you must fulfill to implement this on a RAID 5 array?

    One of the reasons I will be going for 1TB WD GP RE2 RAID drives is power consumption.
    Since a home NAS doesn't have to be quickly accessible all the time, maybe this will be worth exploring some more.

  6. #81
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Hmm. MAID 2.0 may be interesting. :P

    Generally it's implemented at the lowest driver level. For hardware RAID it has to be on the hardware controller itself (some Arecas handle it, as well as LSI and a couple of others). For software RAID, it has to be at the driver level for the HD and tied into the underlying subsystem (dynamic disk et al.) functions.

    For a NAS it would have to be supported on the NAS controller itself. I haven't seen it on NASes yet, but that doesn't mean no vendor has it. Generally it's on the higher-end setups now.

    But like I was mentioning, it's still a highly debatable topic as to its real benefits. Remember that most HDs base their MTTF ratings on a spin-up/spin-down cycle happening only 250 times a year. With a MAID that will increase, so your failure rate will increase as well. Laptop drives have a higher start/stop cycle rating (usually around the 50,000 mark), but their MTTF values are <300,000 hours.
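    That 250-cycles-per-year baseline is the crux. A quick sketch of how fast a MAID-style duty cycle burns through a rated start/stop budget (the 300,000 and 50,000 cycle ratings are the ones quoted above; the cycles-per-day figures are hypothetical):

```python
# How quickly a MAID duty cycle consumes a drive's rated start/stop
# budget. "Years" here is just rated_cycles / yearly_cycles; it says
# nothing about other failure modes, only about the cycle rating.

def years_to_exhaust(rated_cycles: int, cycles_per_day: float) -> float:
    return rated_cycles / (cycles_per_day * 365.0)

for cycles_per_day in (250 / 365, 10, 50):   # spec baseline vs MAID-ish use
    wd = years_to_exhaust(300_000, cycles_per_day)
    samsung = years_to_exhaust(50_000, cycles_per_day)
    print(f"{cycles_per_day:6.2f}/day: WD ~{wd:,.0f} y, Samsung ~{samsung:,.0f} y")
```

    At the spec baseline neither rating is ever the limiting factor, but at a MAID-ish 50 cycles/day a 50,000-cycle drive exhausts its rating in under 3 years while a 300,000-cycle drive still has well over a decade, which is the gap flagged back in post #76.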


  7. #82
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    I must confess that when you talked about MAID, I didn't understand it completely.

    However, after Janus's post, and also running into those same "other" MAID uses , I must say there is nothing substantially new in what I've read...

    I mean, this is already implemented in many drives nowadays, and it is programmable per-drive, and on the fly... I've been using that feature (or something VERY similar) for a couple of years now on my notebook, with NHC (Notebook Hardware Control), which controls not only AAM states but also APM states on the fly.

    Basically, AAM takes care of read/write noises, as we know.

    APM, on the other hand, tells the disk how to behave in idle mode: the lowest state is full power-down, meaning the disk will stop using power (I can get a standard Pentium M to 3+ hours of battery life with this one); then there's a state with no spin-down but with head retraction, which lowers drag; the last one (I don't have the laptop here to confirm the other one) is full power at all times.

    Granted, it still needs HDD support, but it already exists, and some newer disks, from what I've read, can do it dynamically (like head retraction), just not using the full gamut.

    Also, depending on the controller, those settings might not be available. The ICHxR, for one, supports it, but likes to control it directly, so it's basically the same thing as not having it. Software RAID arrays (like Windows') should have less trouble, provided the software developer can access the RAID drives independently, not just the whole volume.

    I really don't know how NHC (or a piece of software regarding NHC's HDD control page) would handle that, but I'd sure like to have software that could do it really on-the-fly (like, depending on the HDD/array load, or the idle time).

    As for NASes, if memory serves me right, I've seen some reviews over at Tom's where that feature seemed to be included.

    Finally, for server farms... Well, I really have mixed feelings about this approach... You see, the main drawback of this kind of setup will be availability (not considering potential MTTF issues, which are not really my field). If you want to use it on hot spares, that is perfectly fine, no harm done. A hot spare IS wasted power, sometimes for months, but a powered-down hot spare would still be "hot", just with a ~10'' lag between disk failure and the start of the rebuild.

    On live HDDs, though, it's another story. When I have my HDDs power down, I then have to wait ~10'' for them to spin back up, meaning I'll have an information lag. Also, Windows freezes a little during HDD power-ups. In short, if you're gaming, for instance, you may actually get yourself into a bad situation if your disk powers down during a game (been there, done that, and trust me, it's not pretty).

    With NASes it's the same thing, only much worse. Instead of having to wait ~10'', you'll have to wait n*~10'' (unless of course you have staggered spin-up disabled and a VERY robust PSU... good luck trying to power up 32 3.5'' HDDs at the same time...). You could probably get away with that with home users with a large tolerance for lag, but as the space needs grow and the lag tolerance gets smaller, well, you get my point...

    Right now, as I've said, I'd like a piece of software to directly control APM and get the best of both worlds: if the disks have been idling for a long time, start dropping the power states, and stop them if necessary (like after 1h+ of idling, or whatever the user tells it to); otherwise let them work at full power (unless the performance isn't needed). Same thing for AAM. That way, you could get better noise/performance/power draw figures and still keep the MTTF in check.
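    That idle-driven policy could look something like the sketch below. The thresholds and target levels are invented for illustration; the only real convention assumed is the ATA APM scale of 1-254 (lower means more aggressive power saving, and values of 127 or below permit spin-down), which tools like `hdparm -B` expose on Linux:

```python
# Sketch of an idle-driven APM policy: step the drive's APM level down
# as idle time grows. Thresholds (300 s, 3600 s) and the chosen levels
# (254/192/64) are hypothetical tuning values, not a standard.

def apm_level_for_idle(idle_seconds: float) -> int:
    """Map how long an HDD/array has been idle to a target APM level."""
    if idle_seconds < 300:        # active or briefly idle: full performance
        return 254
    if idle_seconds < 3600:       # idle a while: save power, no spin-down
        return 192
    return 64                     # idle 1h+: allow the drive to spin down

for idle in (30, 600, 7200):
    print(f"idle {idle:5d} s -> APM level {apm_level_for_idle(idle)}")
```

    A daemon running this loop per array would give exactly the "drop the power states after 1h+ of idling, else full power" behaviour described above, with the thresholds left to the user.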

    In the future, I'd like to see variable spindle speeds (like from 60 to 4200/5400/7200/10000/15000rpm), along with automatic (but still user-controllable) APM and AAM settings. I guess one can dream, right?

    Cheers.

    Miguel

  8. #83
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Yes, the concept is not new and there is some support. The thing is that it needs to be supported (generally) at the lowest level. In a software setup that is done by the OS driver, as it can basically tell the drive to go to sleep. In a hardware RAID that is not the case, as the OS only sees the controller, not the end drive, which is why the controllers need the support. (There's nothing really stopping the industry from coming up with a standard signaling flag to do this; it just hasn't been done that I'm aware of.)

    Powering drives off (i.e., a 'cold spare') as opposed to a 'hot spare' is debatable. Basically the hot spare came about for two reasons: 1) to cut down the time for the user to replace a drive, and 2) to put hours on the drive to hopefully see if it was functional. Newer RAIDs take it a step further with distributed free space: there is no 'spare' drive per se, but the space of a spare drive is distributed across the entire array (like parity). This has the same effect of handling a drive failure, but also has the benefits of increasing the spindle count (the drive is a functional member of the array), and since you are actually using it, you know it does not have a latent error problem.

    But you've mentioned at least what the idea of MAID is (power saving). The questions that haven't really been answered (at least not to my satisfaction) are: What is the increased failure rate due to a MAID design? Is saving power during the 'down' hours really a saving when you are pulling several times the current to spin the drive back up (depending on how often it's polled)? And when you add in the costs of spin-ups, first-block availability delays, and higher failure rates, are you really saving any money in your TCO?
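    The spin-up current question can at least be bounded with a crude energy-only break-even (all wattages and the surge duration below are assumed round numbers for a 3.5" drive, not spec values; this ignores wear and latency entirely):

```python
# Crude energy break-even for one spin-down/spin-up cycle. Assumed
# figures: ~8 W spinning idle, ~1 W in standby, ~25 W averaged over a
# ~10 s spin-up surge. Below the break-even standby time, cycling
# costs MORE energy than it saves.

def breakeven_standby_s(idle_w: float = 8.0, standby_w: float = 1.0,
                        surge_w: float = 25.0, spinup_s: float = 10.0) -> float:
    # Extra energy spent by the surge must be repaid by the
    # (idle_w - standby_w) saved for every second spent in standby.
    surge_extra_j = (surge_w - idle_w) * spinup_s
    return surge_extra_j / (idle_w - standby_w)

print(f"break-even standby time ~{breakeven_standby_s():.0f} s")
```

    Interestingly, with numbers in this ballpark the pure energy break-even comes out at well under a minute of standby, which suggests the real TCO question is the wear and the first-block delay, not the surge energy itself.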

    But then again that's just me.


  9. #84
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Very valid points, indeed, stevecs.

    A standard would be nice, for both the APM states and a way to access them on a per-HDD basis (you never know).

    Interesting point about "warm spares" (not hot, but not cold either... lol), but I'm sure something could be done about it (like having the drive on for some periods). And I had already read about distributed spares, which btw basically kill my "warm spare" theory.

    As for those power drawbacks, you know the saying: there is no such thing as a free lunch. Same thing here.

    First-block availability is my biggest concern, right next to the whole MTTF issue from increased spin-ups. I mean, you can get around the MTTF with variable spindle speeds and read/write head retraction, but once you power down, you'd need a cache as big as the powered-down drives to compensate for that power-down (which is absurd, you'd be better off going SSD... lol).

    Probably in the future we'll be seeing something like power-down for backup/infrequently used data drives, which are not very lag-dependent and where the MTTF won't be severely affected by APM; intermediate states (head retraction and/or variable spindle speeds) for more lag-sensitive operations; and SSDs/efficient hybrids for low-volume, high-speed, very lag-sensitive data access.

    Oh, and btw, with optimization, stop-start states can still be cost-effective. Remember the VW Lupo 3L (an ultra-compact car with a 1.2-litre TDI engine and combined diesel consumption below 3 l/100 km - one of the lowest ever, only beaten by the VW 1L prototype, with - you guessed it - 1 l/100 km combined)? That car actually shut the engine off after 4'' (yes, 4 seconds) of being stopped with the brakes engaged; as soon as you released the brake, the engine started again. So far, I've not read anything that tells me that engine would die faster. It's not for everyone, of course, but it CAN be cost-effective. It just needs optimization...

    Cheers.

    Miguel

  10. #85
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    I wouldn't hold my breath for a variable-speed HD. The mechanical and logical work needed to handle that is _very_ complex, which is why it's not done. Remember that due to the speeds and head-height issues this is largely an aerodynamics problem. Then there's the logical/software issue: to locate zones on a drive when the rotational speed changes, all the timing calculations need to be redone at the new speed while still being based on the zoning/sector allocation - not really something for the on-board controller with its limited processing power.


  11. #86
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Well, I'd be happy if there were only two or three speeds (which would lower the processing requirements), one of them being an "ultra low speed" only for conserving power, with the drive obviously needing to reach the correct spindle speed before any data could be written or read.

    Granted, that would still introduce lag (though much less than a full drive stop), but it doesn't seem to me that it would increase the computational power needed by the on-board controller too much.

    Variable-speed spindles don't seem all that hard to do, but reliability might be an issue. And an "only read/write at the correct spindle speed" rule would reduce controller complexity. Though I might be missing something important - apart from the reliability part, that is... hehe

    Anyway, as you say, I won't hold my breath. HDDs have been around for ages, and if variable-speed HDs haven't appeared yet, there is something more to it. Unless manufacturers haven't really thought of it because of the performance deficit HDDs usually have...

    Cheers.

    Miguel

  12. #87
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Remember that the disk heads are designed to 'fly' on the air current over the platters: the spinning of the disk provides the lift, and the head is designed for a specific amount of lift (i.e., a specific rotational speed) to level off at a certain height. When you change that speed you get a different amount of lift, which could cause the head to crash into the disk.

    It would be much easier for the logic portion if there were just a couple of fixed speeds for locating sectors on the drives, but even then that's something that has to be taken into account each time. Then you have the decision code for when to increase/decrease speeds.


  13. #88
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Hmmm, I hadn't thought of the "air cushion" issue, thanks for reminding me.

    Yes, fixed speeds would be needed. It's a major pain as it is with fixed speeds; I don't even want to know what it would be like with constantly changing speeds. Besides, constantly changing speeds would probably be counterproductive for power draw... Motors, servos and whatnot constantly varying speed would be a major power hassle. The decision code could be left to the OS, as it is now (AAM, APM and IPM are all software-controlled/controllable, not left to the HDD logic).

    Oh well, too many variables to get right; it's probably better not to even think about it. If someone finds a way around this one, though, my guess is it will be a breakthrough, and if it's affordable, I'll be all over it.

    Cheers.

    Miguel

  14. #89
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Hi again, guys.

    Since I somewhat hijacked this thread a while ago (I don't really think Janus minds too much, right? We were helping each other along the way), please allow me to 1) bump the thread and 2) ask some more questions, ok?

    After a very painful loss of ~150GB of information (I don't think I'll ever be able to touch that HDD again, let alone format it and use it regularly, though the disk is perfectly fine), which stevecs kindly helped me out with, I'm back in business choosing the parts for my NAS - I still have to wait for some of the money, though... hehe

    Since the last time I talked about this, I've made some decisions about hardware. CPU, PSU and RAM are chosen, HDDs too. The really big problem is the motherboard.

    So, I'm planning on an E1200 (once you go dual-core, you can't really go back; besides, a Celeron 420/430 is around €40 and the E1200 is €45~€50... hehe), 2GB of RAM (hopefully with more to come, but that will be OK for starters), and for the RAID HDDs I'll probably be using 750GB Samsung F1s (since the WD7500AAKS is too noisy and expensive, and the WD7500AACS is not available in Portugal...), which interestingly enough have about the same noise floor as the GP drives when idle... Nice!

    I'll also want a system drive, but that can be smaller, like 160GB (most of those are sub-€50 right now, with 80GB drives only about €5 cheaper...), though I haven't made up my mind yet (more on that later).

    As you can see, the only thing missing is the mobo. I'll need your help on this one, since I can't find any which fits the bill... Here's what I need:

    - mATX (space constraints, I'm going as small as I can);
    - E1xxx support (duh! lol);
    - 4+GB RAM support, with 4 slots preferably (so some 945GC boards are out of the equation, as well as NVIDIA 6x0i chipsets...);
    - Preferably, dual-channel memory support (VIA, SiS and NVIDIA are a no-go, though the 630i might make it);
    - On-board Gigabit LAN controller (yes, I'm a cheap bastard... lol Also, those things usually can go as high as 300Mbps, and that's just about what I need);
    - If possible, with something like a Realtek 883/883 sound codec (for outputting two music streams to the home - this is still being considered, might not make it to the final version);
    - Lastly, but most important, 4+ SATA ports (VIA and SiS mATX boards only have two, as does the 865G, and I wouldn't be caught dead near any of those... lol);
    - Affordable!

    The 4-SATA-port requirement is a MAJOR pain for me... You see, I've discovered that the most widely - and cheaply - available SATA controller right now, the ICH7, cheats on its SATA implementation... From testing, I've been able to figure out that its four SATA ports working in IDE mode (the only one available) behave like a single SATA port behind a port multiplier... Using HDTune and two very similar 80GB first-generation SATA HDDs, I've measured the throughput at ~60+~60MBps on separate SATA controllers, and ~80MBps total when both disks are on the same controller...

    So, considering that the tests were made with both Windows XP and W2K3 (2K3 doesn't have XP's "only one IDE request per system at a time" limit), can you tell me what performance hit I should expect when using 3 or 4 drives in the array? Or is there something I'm missing here?
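    If the four ports really do share one ~80MBps pipe, the expected per-drive sequential rate is easy to sketch from the measurements above (the ~60MBps per-drive and ~80MBps shared-link figures are the measured ones; the shared-bottleneck model itself is the hypothesis being tested):

```python
# Per-drive sequential throughput under two models: each drive gets its
# own link (capped only by the drive, ~60 MB/s here), vs. all drives
# sharing one ~80 MB/s link, as the ICH7-in-IDE-mode measurements suggest.

def per_drive_mbps(n: int, shared_cap: float = 80.0,
                   drive_max: float = 60.0) -> float:
    """Per-drive rate when n drives split one shared link."""
    return min(drive_max, shared_cap / n)

for n in (1, 2, 3, 4):
    aggregate = min(n * 60.0, 80.0)
    print(f"{n} drives: ~{per_drive_mbps(n):.0f} MB/s each "
          f"(~{aggregate:.0f} MB/s aggregate on a shared link)")
```

    On this model four drives would see only ~20MBps each during concurrent sequential reads - a serious hit for rebuilds and local benchmarks, though an ~80MBps aggregate still sits below what Gigabit Ethernet can move, so normal NAS serving might not notice.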

    If that ICH7 limit really exists and affects the performance of multi-drive setups, then I'll either have to go IDE for the system drive (not really an issue) or forget about 945G/GC/G31 motherboards (the cheapest ones available... hehe), since NOT ONE OF THEM has the ICH7R on the southbridge...

    That would also mean I'd be stuck with very few boards: some G965 from Gigabyte, the GA-G33M-DS2R (also from Gigabyte), the P5E-VM HDMI (MAJOR overkill for a NAS... lol), something X1250- or 630i-based, or switching over to AMD... However, I don't know how the SB600 and 630i SATA ports work, and I can't seem to find any info on them (except for the SB600 on AMD, and early days at that...).

    So, in short, what should I do? I'd be tempted to go X1250 or 630i (more so with the X1250, though it is a little more expensive than most 630i-based boards) if I knew they had good performance - relatively speaking, of course... Software RAID 5 through Gigabit will be limited... hehe

    Also, depending on the solution, what do you recommend for a silent system drive? I was thinking something like the WD1600AAKS (single platter), or its IDE variant.

    Btw, if you can get the same thing at roughly the same price with AMD, please do tell me. 780G would be very nice (6 SATA ports and an insanely small power draw... hehe), but I think that will be too expensive, since AMD boards are usually more expensive than Intel boards, and the X2 3600+ is ~€20-€25 more expensive than the E1200...

    Thanks in advance for the help.

    Cheers.

    Miguel

  15. #90
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    No, I don't mind.
    I believe this thread has become one of the better ones on the web regarding this subject. It's quite a gem.

    I would go for the GA-G33M-DS2R. I know I will.
    I don't know why you say it would be overkill.
    It has all solid caps, and that can only help the longevity of the board.

    I do have an AMD board with the 630A chipset and 7050 GPU, made by ASRock.
    http://www.asrock.com/mb/overview.as...VENF7G-HDREADY
    I got it because that one had good underclocking abilities and used the least power in the first place.
    I got that info from a German forum where they were building sub-20W PCs with this board and low-end AMD CPUs.
    It works okay for now, but who knows how it would hold up 24/7 and all that.

    I power it up using an 80W PicoPSU now, and I have 2x 3.5" HDDs + 1x 2.5".
    I even tried 4x 3.5" + 1x 2.5" and it starts up fine!

    OT: look how many sentences start with "I" in my post.
    Last edited by XS Janus; 03-09-2008 at 09:24 AM.

  16. #91
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Well, having stevecs on a thread about HDDs and RAID does help transform it into a gem (ok, ok, I'll stop kissing stevecs's a** now... lol). Now, how about a sticky? lol It would be nice...

    Btw, thank you for letting me post here. Come to think of it, right now we have basically everything from software RAID-5 to hardware RAID-5 implementations, as well as stripe size discussions AND power draw considerations... Nice!

    For me, the GA-G33M-DS2R (like the P5E-VM HDMI) seems like overkill because that's the kind of board I'd buy for a GAMING rig (a very compact gaming rig... lol). My main rig is a P5B-Deluxe, and the G33M or the P5E-VM mop the floor with the P5B... Also, my "media viewing" PC is 945G-based, and it does feel weird to have such a powerful board powering a NAS... hehehe

    Hmm, those sub-20W PCs sound like a blast... What kind of CPU are you using for those? Semprons? Not X2s, right? Still, VERY interesting... That kind of stunt isn't possible here in Portugal, since PicoPSUs are not available

    Ok, now you got me thinking about using something like RMClock Pro AND SetFSB (or whichever software you can use to OC on-the-fly on AMD systems) to get the system to stock speeds in case of prolonged full load... Damn, I wish I lived in the US...

    Oh, btw, since you have that board, would you mind testing it for maximum SATA throughput? Like, try the three (or four) drives and HDTach/HDTune them all at the same time, and then just one, to check for interface bottlenecks. Since that's a 630a, and the 630i seems to have the same IOCH, I'll know what I could expect...
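    If it helps, the "all drives at once vs. one at a time" comparison can be scripted instead of eyeballed from HDTach charts. Here's a rough Python sketch (the file paths are placeholders, not real mount points) that reads one large file per drive, so you can run them together and then individually:

```python
# Rough concurrent-read benchmark to spot a shared SATA/IOCH bottleneck.
# The FILES paths are placeholders: point each one at a large file that
# lives on a different physical drive before running.
import threading
import time

FILES = ["/mnt/disk1/big.bin", "/mnt/disk2/big.bin", "/mnt/disk3/big.bin"]
CHUNK = 8 * 1024 * 1024  # 8 MiB per read call

def read_all(path, results, idx):
    """Sequentially read one file and record its throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    results[idx] = total / (time.perf_counter() - start) / 1e6

def bench(paths):
    """Read all files at once, one thread per drive; return MB/s per file."""
    results = [0.0] * len(paths)
    threads = [threading.Thread(target=read_all, args=(p, results, i))
               for i, p in enumerate(paths)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    If the combined run's total falls well short of the sum of the solo runs, the ports share an upstream bandwidth cap; if each drive keeps its solo speed, the controller really is per-port.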

    Cheers.

    Miguel

  17. #92
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I don't know about the sticky. It would be nice, though. But I'm not sure who can nominate a thread for that sort of thing.
    But I sure wouldn't like to see all this info fade into oblivion, as this thread really does have all sorts of file server related info.

    I did some measurements using HD Tach.
    3 drives were tested, as those are now in my WHS server: a WD 10EACS, a 200GB Seagate (can't remember which now) and a 2.5" Fujitsu 5400rpm 160GB drive.
    I ran the test twice with all the drives going at the same time, with little difference between runs.
    The combined average read was 146.5 MiB/sec and seek was 24.5 ms average per drive.
    I'm not sure what this shows, though. But am interested to find out.

    Regarding low power...
    The guys on the German forum were using this ASRock board and AMD BE dual-core chips + one notebook drive, all powered by a PicoPSU and an AC brick that they proved to be slightly more efficient than those offered by shortcircuit.com and mini-box.com
    They even downclocked the systems to around 1-1.2GHz
    I followed their component choices and even used one of the new single-core Sempron LE chips, but my power draw never went below 23W with just a 2.5" system drive, even when downclocked to 700MHz (I think 600MHz was my lowest )
    Why haven't I reached their sub-20W score? Might be that my cheap power meter is bad at those kinds of measurements, or that the brick makes that much difference (which is not so likely, cause their brick was just slightly more efficient)
    I bought my PicoPSU from shortcircuit.com
    So to sum up: my goal was a low power, high capacity WHS.
    I was running at 29-31W using a 1TB GP drive from WD. The room in my small and thin case allowed me to fit up to 3x 3.5" and 1x 2.5" (I modded it that way) And all that would run at ca. 45-50W.
    Not bad for 3TB of storage!

    But my plan was first derailed when I noticed my file copies were dead slow, and when I upped the CPU speed the system recovered.
    If that were my only issue I wouldn't be considering a new RAID5 server, but file corruption and the risk of losing data to HDD failures prompted me to start a new project.

  18. #93
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, guys, finally I'm able to get some time to reply to this thread...

    Last time I had the time was something like last Monday, but then my laptop crashed during hibernation... Sorry about that.
    Quote Originally Posted by XS Janus View Post
    I don't know about the sticky. It would be nice, though. But I'm not sure who can nominate a thread for that sort of thing.
    But I sure wouldn't like to see all this info fade into oblivion, as this thread really does have all sorts of file server related info.
    Well, for a thread to go sticky, it will first need to get the attention of an Admin, and then be considered "relevant material". And, as with Admin status, you can't just tell an Admin "oh, look at me, I'm Admin material, please bump me up the food chain" - that will actually work against you -, so we'll have to wait and see if an Admin picks up this thread.

    Quote Originally Posted by XS Janus View Post
    I did some measurements using HD Tach.
    3 drives were tested, as those are now in my WHS server: a WD 10EACS, a 200GB Seagate (can't remember which now) and a 2.5" Fujitsu 5400rpm 160GB drive.
    I ran the test twice with all the drives going at the same time, with little difference between runs.
    The combined average read was 146.5 MiB/sec and seek was 24.5 ms average per drive.
    I'm not sure what this shows, though. But am interested to find out.
    Thank you for the tests, they do help me.

    I'm assuming you didn't notice "flat" transfer rates in the first 10~20% of the drives, right? The charts were round and smooth (well, you know what I'm referring to, right? ), as they should be, were they not?

    If they were normal graphs, that tells me without a doubt that the 6xx series NVIDIA chipsets, and probably also the 700/800 series, have per-port SATA controllers, as opposed to the ICH7-based southbridge, which makes them good integrated controllers to build my NAS around. Not that I'd be starved for performance at 80MBps sustained, but I just don't want to be capped that way...

    Quote Originally Posted by XS Janus View Post
    Regarding low power...
    The guys on the German forum were using this ASRock board and AMD BE dual-core chips + one notebook drive, all powered by a PicoPSU and an AC brick that they proved to be slightly more efficient than those offered by shortcircuit.com and mini-box.com
    They even downclocked the systems to around 1-1.2GHz
    I followed their component choices and even used one of the new single-core Sempron LE chips, but my power draw never went below 23W with just a 2.5" system drive, even when downclocked to 700MHz (I think 600MHz was my lowest )
    Why haven't I reached their sub-20W score? Might be that my cheap power meter is bad at those kinds of measurements, or that the brick makes that much difference (which is not so likely, cause their brick was just slightly more efficient)
    I bought my PicoPSU from shortcircuit.com
    So to sum up: my goal was a low power, high capacity WHS.
    I was running at 29-31W using a 1TB GP drive from WD. The room in my small and thin case allowed me to fit up to 3x 3.5" and 1x 2.5" (I modded it that way) And all that would run at ca. 45-50W.
    Not bad for 3TB of storage!

    But my plan was first derailed when I noticed my file copies were dead slow, and when I upped the CPU speed the system recovered.
    If that were my only issue I wouldn't be considering a new RAID5 server, but file corruption and the risk of losing data to HDD failures prompted me to start a new project.
    Well, those "BE" chips have a lower TDP at load than Semprons (45W vs. 65W, if memory serves me right), and also have a few design differences from standard X2 parts that make them very power efficient at idle (and, at least on the first chips, those design differences also somewhat killed the extra OC headroom you could expect from a lower TDP part...), so you'd expect them to behave better than Semprons (which, btw, are usually not screened for low power as thoroughly as the BEs... Their voltages should be considerably higher than the BE parts'). That brick shouldn't cause massive power consumption differences, maybe 5W tops (and I'm being VERY generous here...); 1~3W is more likely, given the already low consumption. But you did tinker with the CPU voltages, right? If not, then that Sempron would probably take you down to sub-15W consumption

    Still, it IS impressive to build a sub-20W dual-core machine (or sub-50W, for that matter) using mainstream parts (albeit massively underclocked). I just wish those special extra parts (PSU, brick) were more readily available... They're insanely expensive, and most people would prefer to simply use a less efficient PSU and be done with it... Maybe someday we'll get there.

    Well, even if you can't go the WHS route because of the corruption issues, you could still go W2K3 + software RAID and still be under 100W power consumption if you used F1s or GP drives. Probably a 120W PicoPSU + brick would be able to handle a system like that (one of my dreams, really...). But that Areca card (I think you said you'd go that route, right?) will give you a LOT more options. And still probably well under 150W consumption, right? Not that bad.

    Good luck with that build. I'm just waiting for the money to become available to start buying stuff. Hopefully, by the time I get around to doing it, some new CPUs and boards will be available (like dual-core Semprons, those would be sweet), and my choice will be easier (or not... lol). In the meantime, do keep posting RS600, 700A and 800A SATA performance figures, if you come across them, ok?

    Cheers.

    Miguel

  19. #94
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, guys, a small update here.

    I'm really considering going over to AMD for this build. I've been checking things out, and this is what I've come to understand:

    - an Intel-based system would need an E1200 (~€50) plus either a GA-G33M-DS2R (~€95) or 4Core1333-FullHD (€75~€85), since basically everything else lacks at least one thing I need/want (AHCI, independent SATA ports, or Dual Channel). Best choice would be the ~€145 combo (E1200+Giga), but the CPU could be a bottleneck on parity calculation (only 512KB cache...);

    - an AMD-based system would need an X2 3600+ EE (~€42, very limited availability), an X2 4200+ EE (~€60, also limited availability), or even a BE-2300 (~€70) - oddly enough, the X2 EE parts seem more power-efficient than the BE ones... -, with something 7050 or 780G-based (~€57-€77, ASRock and Asus, respectively). The worst-case scenario would cost about the same as the Intel combo, but with more performance (I think), and probably less power consumption (not sure on this one, I'm getting VERY contradictory reports...). I've even seen the 780G shown as more power-hungry than the 690G, the 690G and 7050 swapping places, and even a G33+E4300 combo being less power-hungry than a 780G+4850e combo... The best HDD controller prize seems to go to the ICH9R, followed by the SB700/600 (albeit without RAID-5 support...)

    What do you think about this? Which combo would have better price/performance and price/power draw ratios? I want to keep the system low-power, after all...

    Thank you in advance for your help, and once again sorry for taking over the thread. But I still think it's relevant to the topic title...

    Cheers.

    Miguel


    [EDIT] Man, and I thought Intel had too many part numbers... I'm completely stumped on which AMD X2 EE CPU to choose... Should I go Windsor EE or Brisbane EE? They seem just about the same, apart from the lower cache, the odd half multipliers and the apparently higher voltages... Weird... [/EDIT]
    Last edited by __Miguel_; 03-20-2008 at 04:04 PM.

  20. #95
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I will go for the Intel platform this time, cause I trust my chosen P35 chipset more.
    It's easier to make a long-term platform. IMO, that is.
    I'm not well versed in AMD CPUs these days; there are too many variations of the same thing.

    My little WHS is now on AMD cause I was experimenting with how low the power usage can go. I concluded that it was ca. 5-8W less, and that is with this super-low-power ASRock board. Other boards were more power-hungry, so I guess they would nullify the difference.
    The Intel system was running a Giga DS2 board and a new Celeron 420 before.

    I don't think the Intel platform would be a bottleneck with the E1200 CPU.
    That is, unless the software RAID is not written for dual cores.

    In the long run I don't think it would matter much which platform you choose.
    Since you are going software RAID, I would choose the one with the best warranty and long-term parts availability (like being able to get the same mobo in a few years if needed)
    Maybe Intel mobos have the upper hand in this, I don't know for sure.

  21. #96
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Ok, Janus, thank you for the input. I do have to agree with you on that, even if LGA775 is actually being replaced in the short run. Do you know if the DS2R underclocks well?

    Today I went out and actually asked a store owner which one would he recommend. He pointed me to a Nexus LX external box... Yeah, riiiight... Seriously, I don't know why I still even bother...

    Also, sub-1TB WD GP drives are a no-go (most stores don't even know they're available; suppliers suck and don't actually tell stores which model they're buying, only the capacity, and they also buy based only on capacity, so we're still stuck with first-gen 750GB drives...). So I'll probably have to stick with 753LJs (which seem nice, almost on par with the GP drives on both noise and heat output).

    I guess the choice is almost complete, then. Only memory and PSU missing, plus assorted parts (cables, LEDs, the actual case I'll have to build, damping mechanisms for the HDDs...).

    Oh, right, I almost forgot: which HDD for the system drive? A WD1600AAJS, something more silent in 3.5'', or an 80GB 2.5'' drive?

    Cheers.

    Miguel

  22. #97
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    The DS2R will underclock like a true champ!
    It has vCore voltages as low as 0.5V! As well as other things you can undervolt. You can't beat that.
    The limiting factor here will be the 6x multi on Intel CPUs.
    All the other expert features are included in the BIOS, like they are on the bigger and more expensive boards.

    On the system drive: I'm thinking about that now as well.
    I would like to go for a single-platter drive. Is this WD you suggested a single-platter one?
    I think stevecs would not recommend a 2.5" drive for 24/7, cause those are mostly meant and built for notebook usage and tend to fail; maybe RAID1 for reliability (but that's an expensive option with little power benefit, IMO).
    I can't even make up my mind on the capacity of the system drive.
    Single-platter 160GB or single-platter 320GB?
    I want to run torrents on the server to give my notebook a rest. But seeding would be a problem on a 160GB drive.
    Maybe I shouldn't put my torrent folder on the system disk at all. What do you think?

  23. #98
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by XS Janus View Post
    The DS2R will underclock like a true champ!
    It has vCore voltages as low as 0.5V! As well as other things you can undervolt. You can't beat that.
    The limiting factor here will be the 6x multi on Intel CPUs.
    All the other expert features are included in the BIOS, like they are on the bigger and more expensive boards.
    Well, that's the second piece of good news I've had today... I seriously doubt any Intel CPU can go as low as 0.5V, but it's good to know the option is there. I don't see how the 6x multi will be limiting, though... If you manage to get the FSB to 100MHz, an E1200 would still be a 600-800MHz CPU, more than enough for basic applications. I'm not that sure about the software RAID performance, though (but then again, even the mighty 955EE only needed about 5% CPU to get 40~60MBps sequential on RAID-5...)

    Also, the CPU would need to support lower FSB speeds. The E6x00, for one (my main rig has a B2 stepping E6400), doesn't like 200MHz (and lower) FSB speeds AT ALL (won't POST, and it will hang using Windows-based OC software on a P5B Deluxe...).

    At the end of the day, I'd prefer to use stock FSB speeds, but go as low with the vCore as possible. I might need help to have the system run with both C1E/EIST AND lower-than-stock voltages... The DS2R has a "de-overvoltage" option, right? I'm guessing that allows EIST/C1E to work, but offsets the requested vCore by a certain amount (which would be just what I want).

    Quote Originally Posted by XS Janus View Post
    I would like to go for a single-platter drive. Is this WD you suggested a single-platter one?
    I think the WD1600AAJS was actually the first single-platter 160GB drive available on the market. It debuted around a year ago, if memory serves me right. I'd like to know if there are other options, though...

    Quote Originally Posted by XS Janus View Post
    I think stevecs would not recommend a 2.5" drive for 24/7, cause those are mostly meant and built for notebook usage and tend to fail; maybe RAID1 for reliability (but that's an expensive option with little power benefit, IMO).
    I have to agree with you on that one. stevecs would definitely NOT recommend a 2.5'' drive as a system drive. Unless, of course, it was a SAS drive... I was thinking 2.5'' because of power and noise concerns only.

    As for RAID-1, it is a good idea, really. But system drives on non-critical servers can go without such a configuration, provided that you keep updated image files of the system partition. I'm considering going that way, to keep costs, power draw and noise in check.

    Quote Originally Posted by XS Janus View Post
    I can't even make up my mind on the capacity of the system drive.
    Single-platter 160GB or single-platter 320GB?
    I want to run torrents on the server to give my notebook a rest. But seeding would be a problem on a 160GB drive.
    Maybe I shouldn't put my torrent folder on the system disk at all. What do you think?
    Well, that's just about my own problem, the torrent one, that is. But let's tackle this in separate steps, ok?

    First: having Torrents on your system drive: yes or no?

    Honestly, I've been doing that for years now, without a hitch. My main server has a single Samsung 321KJ (320GB) drive with three partitions (system, WSUS and data) working 24/7 for the last two and a half years (well, the 321KJ is fairly new; before that there was a Hitachi 250GB drive, and before that two 80GB drives). None of them had issues; I changed them mainly because I needed more space (and to lower noise output), and every one of those drives is still operational (even one of the 80GB ones, which came from a failed RAID-0 array, and which I honestly think came with bad clusters - the reallocated sector count is through the roof, but no problems until now).

    So, in short, and considering the OS itself will need little HDD access besides booting, and is not very latency or speed sensitive (once it's up, it will stay up), I think you can get away with such a configuration. Otherwise it would simply be a waste of HDD storage space... lol

    Second: how big of a system drive?

    Seriously, is this even a question? lol

    Ok, I'll stop jerking around. For me, 320GB actually became too big right after I started offloading some files elsewhere, and I'm only using a ~250GB partition for both the incoming and temp P2P folders. I'm no expert, but even with a 16Mbps/1Mbps connection, anything in excess of 100GB of P2P data on a drive seems waaay too much. You simply can't manage that many data files efficiently. Better to keep the list short, and reseed if asked/needed. But, as always, YMMV.

    I'll most likely be using the following configuration:

    - 160GB system drive (80GB ones are not that much cheaper, and are usually worse/older/louder parts), with two partitions (not needing WSUS, and even if I want to go there, I'll just create a folder, it's much more practical), OS and Data;

    - Any P2P software running will offload the completed files to the RAID array during off-peak hours (I'll need a sorting and/or auto-move tool for that one, any ideas?), to keep accesses as sequential as possible, and also to keep fragmentation low (I don't even want to know how long it will take to defragment a 1TB+ software RAID-5 array... I'll have to do it over the holidays... lol).

    So, in short, single-platter 160GB seems to be a good system drive, even if you do want to have it do more stuff than just "sit there and look pretty"

    Cheers.

    Miguel

  24. #99
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I also wondered about the auto-move function.
    Do you know if choosing the "auto move when finished" option in torrent programs will prevent you from seeding that file as soon as the download has finished? (seems logical that it will, and that could cause issues with my ratio)

    It looks like 160GB could be enough now. BUT as I find myself downloading more and more HD and huge torrents, this "could" be a problem. My notebook drive is constantly full. However, I never seed that much, so offloading files to the array sooner would be an answer for a 160GB drive.
    That said, if I find a way to buy a WD3200AAKS and be sure I'm getting a single-platter one, I will probably go for it and not look back.

    I think I would prefer manual offloading of files too. I don't think I could trust software much for this. I mean, moving files in Windows is bad enough, but moving them via god knows whose code... :/
    The manual "Copy" option is my preferred way. Burned myself too many times even with manual "move". One glitch and BAM! Your directory/file is fried. What's been moved, what was not and what is complete is anybody's guess at that point. Joy for the whole afternoon... aahh... good times

    Besides, maybe partitioning a 320GB drive as, let's say, 80GB for the system + the rest for all other stuff would not be a bad idea.
    We'll see... first I need to get my stuff back from RMA and figure out what went wrong with my 1st attempt to assemble it. (thank God I hadn't bought the controller card or all the drives, cause I don't think they would have believed me that they were all a bad batch, if you know what I mean )

  25. #100
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by XS Janus View Post
    I also wondered about the auto-move function.
    Do you know if choosing the "auto move when finished" option in torrent programs will prevent you from seeding that file as soon as the download has finished? (seems logical that it will, and that could cause issues with my ratio)
    Well, I rarely use torrents, so I'm not very familiar with that "auto-move" function, sorry. But, unless that function moves the files and updates the "read torrent+data from..." setting (which would cause severe random reads on the RAID-5 array, and that's NEVER a good idea with RAID-5, hardware or otherwise), you'll automatically stop seeding that file, so yeah, that would pretty much kill your ratio.

    Quote Originally Posted by XS Janus View Post
    It looks like 160GB could be enough now. BUT as I find myself downloading more and more HD and huge torrents, this "could" be a problem. My notebook drive is constantly full. However, I never seed that much, so offloading files to the array sooner would be an answer for a 160GB drive.
    That said, if I find a way to buy a WD3200AAKS and be sure I'm getting a single-platter one, I will probably go for it and not look back.
    What you said, only difference would be that the price should be about equal for both parts... hehehe

    Quote Originally Posted by XS Janus View Post
    I think I would prefer manual offloading of files too. I don't think I could trust software much for this. I mean, moving files in Windows is bad enough, but moving them via god knows whose code... :/
    The manual "Copy" option is my preferred way. Burned myself too many times even with manual "move". One glitch and BAM! Your directory/file is fried. What's been moved, what was not and what is complete is anybody's guess at that point. Joy for the whole afternoon... aahh... good times
    lol I've never had issues with manual moves before. Except with files that were corrupted to start with (one of my server's memory sticks started misbehaving... it took me months to figure out why I had to download some stuff twice, and why sometimes video files had glitches...).

    Granted, I don't move whole folders, only individual files (or groups of files), so that may well be the reason I have fewer problems... Though I'll keep that in mind.

    Quote Originally Posted by XS Janus View Post
    We'll see... first I need to get my stuff back from RMA and figure out what went wrong with my 1st attempt to assemble it. (thank God I hadn't bought the controller card or all the drives, cause I don't think they would have believed me that they were all a bad batch, if you know what I mean )
    Well, if you had bought both, you would simply have had to wait and install them again, unless you couldn't pinpoint the origin of the fault. THAT would be a tough sell indeed: one top-of-the-line RAID controller AND 3/4/5 über-capacity (and price... lol) drives all failing at the same time... Can you say "not happening"?

    Btw, one thing I've been thinking (run for the hills! lol): RAID controllers are basically dedicated CPUs and memory for the parity calculations (which aren't even needed when reading, unless the array is crippled). Also, it's very clear that parity calculation takes a fraction of today's CPUs' computing power (see the links I posted a while back).

    So, my point is: since there are usually no backup batteries on CPUs like there are on hardware RAID controllers, to keep data from being lost after a power outage, software RAID most likely calculates parity for every block written and sends it immediately to the drive, to minimize data loss, without even considering storing it in RAM. This, of course, creates abysmal performance for software RAID, when it could actually be the fastest configuration available...

    So, am I too far off on this? I don't think so...

    Also, would it be possible to rewrite Intel's or Microsoft's RAID driver to actually use system RAM as a cache before sending the data to the array? That would open up insane performance boosts for software RAID...
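    To make the speculation concrete: the parity arithmetic itself is plain XOR, cheap enough that buffering whole stripes in RAM and computing parity once per stripe is the whole trick. A toy Python sketch (illustrative only, nothing to do with how Intel's or Microsoft's drivers are actually written):

```python
# Toy illustration of RAID-5 parity math (plain XOR), not a real driver.
def xor_parity(blocks):
    """Parity block = byte-wise XOR of all data blocks in one stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(survivors, parity):
    """Recover a lost block by XOR-ing the surviving blocks with the parity."""
    return xor_parity(list(survivors) + [parity])
```

    The write-back idea from the paragraph above would amount to collecting whole stripes in RAM, calling xor_parity() once per full stripe, and flushing data plus parity together; that is exactly what a battery-backed hardware controller can do safely and a bare software driver is reluctant to.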

    Cheers.

    Miguel
