
Thread: RAID5 file server build advice

  1. #126
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Quote Originally Posted by XS Janus View Post
    Oh my friend but there is a bigger brick a Dell 220W brick
    Man, THAT is one SWEET brick. 220W is just colossal for a power brick...

    I can't imagine how much noise that thing produces, though... If 120W bricks are noisy, that must sound like a jet plane taking off... hehehe

    Quote Originally Posted by XS Janus View Post
    OK, so now because you are so resourceful and good a digging specs up and doing math you can maybe solve this idea of mine:
    Thank you for the compliment.

    Quote Originally Posted by XS Janus View Post
    After re-reading your posts Miguel I realised I asked my Q the wrong way.
    Could you help me calculate the needed amps that need to be provided to power my base setup + controller + 10 total GP drives?
    Well, first I'll need to figure out exactly how much power the Mobo+CPU+Memory combo draws on each of the lines.

    Then, I'll actually need to know how much power the WD drives draw on spin-up, to make sure I won't cross any boundaries just by powering up the drives (it CAN happen... remember we're talking VERY low numbers here...)

    The first one I cannot know for sure. You can help there, though... It seems it's possible to measure how much power is being drawn through the connectors (not sure how, the last time I took physics was like 10 years ago...):

    Quote Originally Posted by The Tech Report
    For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. Through the magic of Ohm's Law, we were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive.
    Probably, sticking a multimeter in there and just measuring how many amps are being drawn can also work... hehehe Just be sure to measure both motherboard connectors, since the 4(8)-pin one is on a different rail from the 20(24)-pin.
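    Just to make that shunt method concrete, here's a quick Python-style back-of-the-envelope sketch of the Ohm's Law math (the voltage-drop readings are made-up example numbers, only the formula matters):

    # Sketch of the 0.1-ohm shunt method from the Tech Report quote above.
    # The two voltage-drop readings are invented examples, NOT real measurements.
    SHUNT_OHMS = 0.1

    def rail_power(v_drop, rail_voltage):
        current = v_drop / SHUNT_OHMS      # Ohm's law: I = V / R
        return current * rail_voltage      # P = I * V

    p5 = rail_power(0.070, 5.0)    # 0.070 V across the shunt -> 0.7 A -> 3.5 W on the 5V line
    p12 = rail_power(0.040, 12.0)  # 0.040 V across the shunt -> 0.4 A -> 4.8 W on the 12V line
    print(round(p5 + p12, 1), "W total for the drive")   # -> 8.3 W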

    So, if you can help out there, we'll see just how much power your motherboard actually needs. Do be sure to measure at full load, though, else it may botch the results.

    I do have, however, a figure for spin-up for the 1TB drives (I'll assume it's the same for the 500GB, I shouldn't be too off), courtesy of STRC (StorageReview.com): 1.36A+0.96A (12V+5V) EACH.

    Also, I took a look at the M4's manual for the power ratings. They're actually VERY good: 12A on the 12V rail (if you're feeding the PSU 11-16V input), and 15A(!) on the 5V line.

    Quote Originally Posted by XS Janus View Post
    Also have you find out any other numbers on just how efficient is M4?
    From the manual it looks to be even better than Pico 120W, but I won't need 50% capacity (hopefully)
    How much lower could it be at let say 40%?
    The M4 is rather new. A couple of months old at most, probably. I haven't found anything on it yet, not even efficiency values. But perhaps I didn't search enough...

    Also, do keep in mind that, with your current setup, and if my last calculations were correct, at peak consumption you'll already be in the danger zone for the Pico120, which is a 140W max PSU with ALL the rails accounted for (I don't think it would last long with a full ~200W peak load on all rails, really...). So, you'll surely be at 50%+ capacity with the M4...

    So, let's do some maths, ok?

    So, I'll need to make sure the Mobo+CPU+Memory+(n-1) drives, powered up and idling, keep below (12A-0.35), so the last drive spinning up (due to staggered spin-up, remember? All the others will be powered up by then) doesn't overload the PSU too much AND normal operation falls within the PSU specs. Actually, that math is overly pessimistic because those are worst-case values, but it's best to work with full power figures, just in case...

    So, "Start->Run->Calc", and you get:

    12V line (12A available, peak at 16A for sub-30'' periods)
    4A for the Mobo+CPU+Memory (pending)
    1.5A max for the controller
    3.28A for the 1+9 drives
    1.36A for the last powering-up drive

    Total of 10.14A peak when starting up the HDDs (actually, that should be more like 8~9A, because of that "worst case scenario" I was considering).

    5V line (15A available, peak at 20A for sub-30'' periods)
    3A for the Mobo+CPU+Memory
    6.87A for the 1+9 drives
    0.96A for the last powering up drive

    Total of 10.83A peak when starting up the HDDs (again, that should be lower, probably in the 8A range; see the quick re-check sketch below).
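    (If you want to re-run those totals with different drive counts later, the sum is trivial to script. A throwaway Python sketch, using the same worst-case figures from above:)

    # Throwaway re-check of the worst-case rail math above. Staggered spin-up means
    # only ONE drive is spinning up while the rest are already up; figures as quoted.
    amps_12v = 4.0 + 1.5 + 3.28 + 1.36   # mobo/CPU/RAM (pending) + controller + 1+9 drives + spin-up
    amps_5v  = 3.0 + 6.87 + 0.96         # mobo/CPU/RAM + 1+9 drives + spin-up

    print(f"12V: {amps_12v:.2f} A of 12 A   5V: {amps_5v:.2f} A of 15 A")
    # -> 12V: 10.14 A of 12 A   5V: 10.83 A of 15 A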

    After all this, we also have to consider three other things:

    1) Power draw on the +3.3 and +5VSB lines (not usually much, but better make sure, because of what I'm going to say next);

    2) Max output of the 220W brick, which is 18A@12V, meaning only ~3A(@12V, which isn't actually all that bad) will be available for other lines in this worst-case scenario, so you'll have to do OTHER maths for those lines;

    3) Power efficiencies - I'm not taking those into account, especially the power brick efficiency limitation, so you might need to check on that, ok?


    In short, if I didn't mess anything up you should be able to run that server off an M4, plus a 200W+ brick.

    Hope this helps.

    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power for a good use.

  2. #127
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Braga, Portugal
    Posts
    345
    Oh, btw, I assumed you wanted 10x1TB + 1x500GB drives on your server. If that's not right, you'll need to subtract the appropriate values for the 500GB drive from the "3.28A" and "6.87A" lines (which then become 3.06A and 6.17A).

    Oh! I almost forgot! I can't stress this enough... YOU WILL NEED STAGGERED SPIN-UP! There is NO WAY a system like that will power up all at once.

    Cheers.

    Miguel
    Don't forget to visit http://www.xtremesystems.org/forums/...play.php?f=214. Stop wasting CPU cycles and FOLD. Put your extra CPU power for a good use.

  3. #128
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    wow, just read this entire thread. it's a lot to digest, and lots of it went right over my head... but good info here.

    currently, i'm planning a 3-5TB media server build & have considered lots of routes. plus, i have lots of spare parts looking for a home, and I may salvage other parts from the rig in my sig.

    should i start another thread for build advice?

    and what are you guys doing about backups? i really want an easy to maintain, automated backup strategy for my critical data (eg - family photos, videos, master recordings from concerts (i was a 'taper'), and misc files). I simply cannot lose this data. it's currently copied onto 3 separate drives (one is external). approx 500GB today. I expect it to grow to 1TB in the next 12 months; probably approaching 2TB in 24 months. (got an HD camcorder (Sony SR11), plus I want to backup all ~70 hours of my SD miniDV tapes)

    My ideal goal would be to have something like the icydock MB455 coupled with an extra tray (MB435TRAY is the part number). then, use two 1TB drives solely for two-generation redundant backup. so, one 1TB drive will be in the bay as "current" backup (automated process - no idea how to do this yet.. acronis?). And the other drive will be offsite at my office.

    At some interval (weekly or monthly), I'll rotate the drives between the home machine and my office. the theory is that when i put the "old" drive in the bay, it will be rebuilt with current data. then, i take the other drive up to the office. that sounds like a good plan??? but I have not figured out the details to implement it yet. and i'm still questioning whether or not this even makes the most sense. the key is that it has to be easy and automated, or i'll never do it.

    the big caveat to the drive rotation strat is that the size of my backup is limited to the largest available single drive (1TB as of today).
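    (one rough way the "automated" part could be done without Acronis -- a Python sketch of a scheduled mirror job, with hypothetical paths and drive letter, run weekly from Task Scheduler so it refreshes whichever backup drive is sitting in the bay:)

    # Rough sketch of a "mirror to whichever backup drive is in the bay" job.
    # Paths and the F: drive letter are hypothetical; needs Python 3.8+ for dirs_exist_ok.
    import datetime, os, shutil

    SOURCE = r"D:\critical"          # photos, video masters, etc.
    DEST   = r"F:\backup\critical"   # the drive currently sitting in the hot-swap bay

    if os.path.exists("F:\\"):
        # Refreshes the copy in place; note it does NOT delete files removed from SOURCE.
        shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
        with open(os.path.join(DEST, "last_backup.txt"), "w") as f:
            f.write(datetime.datetime.now().isoformat())
    else:
        print("Backup drive not in the bay -- skipping this run.")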


    thanks for all the info in this thread!
    Last edited by lefy; 05-02-2008 at 01:08 PM.
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  4. #129
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Quote Originally Posted by lefy View Post
    wow, just read this entire thread.

    j/k

    You sure can seek advice here freely. Please do so.

    I did some RAID0, RAID5 tests on 4xWD1000FYPS drives and will post them shortly.

    As for backup, this is stevecs's reply to a recent thread on the subject:
    Quote Originally Posted by stevecs View Post
    Acronis does not like software or 'fake' RAID setups (on-board RAIDs). It's generally good (and so is Ghost, among others) for hardware-based RAIDs, as those present the RAID drive to the OS as a SCSI volume. The software does not need to know anything really about it.

    I use Acronis here for backing up my Arecas and LSI RAIDs (well, the boot RAIDs; the main Areca one is just a tar/bacula backup of the data to tape).
    Thread here

    I will try and digest your questions in a little while. I'm busy installing Vista now.
    Last edited by XS Janus; 05-02-2008 at 01:13 PM.

  5. #130
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    So, backing up a bit... you can probably tell I'm really trying to accomplish two different goals: building a media server and creating a backup strategy.

    Can I accomplish both goals with one build? Please say, "yes"


    thanks for that link & letting me jump in with some of my own questions!
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  6. #131
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    Here are some spare parts I have:
    CPU: Q6600 (L726A)
    Cooler: Thermal Right Ultra 120 Extreme
    RAM: Crucial Ballistix - DDR2-800 4 x 1GB
    SODIMM: Crucial Laptop RAM - DDR2-667 2 x 2GB
    Optical: Pioneer DVR-212D
    Case: Rocketfish Case
    PSU: PCP&C 1KW-SR, willing to swap it with the HX620W, but that means I'd also need a new case for my main rig. which is fine

    If I use all of that, this is what I need:
    Drives: 4 x Samsung F1 1TB HD103UJ (+ $320 if I go enterprise drives)
    Mobo: not sure, but willing to salvage my IP35 Pro and upgrade the sig-rig
    VGA (if not IGP): something < $100. Probably ATI 2600 XT
    Hot Swap Bay + Extra Tray: icy dock MB455

    RAID Controller: I wanted to try ICH9R, but everything I read indicates I should go for a hardware controller. plus, I've got some 2GB sodimms. so, I was looking at the Areca 1231ML, but at $850 it's about double what I want to spend. I'd do it if it's really worth it... whatever that means.

    OS: Ubuntu, Mythbuntu, WHS... not sure yet. A primary function of the server is to feed my HTPC content... someone told me to check out mythbuntu & MythTV.


    Any recommendations? don't use the spare parts for this?
    Last edited by lefy; 05-02-2008 at 03:15 PM.
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  7. #132
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    1. Do you want to edit your videos locally on the server and use it as an active rig?
    If not, you should go for a much, much smaller cpu. You can save a lot of power by buying smart.

    2. What is the notebook ram for?

    3. try and reduce all the un-needed parts (the fewer things that can break down, the better). Think about going IGP. You don't need 3D.

    4. If you wanna make a backup of all your stuff, that will get expensive. You could go for RAID1. That means you mirror your entire array once, which also means 2x the drives. Or even RAID51, which would mean 2 more drives wasted on top of that. Additionally you can do off-site backup: on HDDs that are not plugged in all the time, tapes and stuff like that. Also, a backup array on the same hardware controller as my main array is not something that sounds very safe to me. :/

    5. I see you would like to go for a 12-drive controller. I must stress this for you again: if you want to add drives to your array later, that will take a LONG time, and it gets slower each time the more TBs you have. One way around this is to have a complete backup of your data. That way you just delete your array and make a new, bigger one. Or just buy all the drives now and do a full system replace once you fill this one up.

    6. Hardware RAID is definitely the way to go. You have a better chance of moving the array to a different mobo, etc. I went for the 3ware 9650SE 8LPML. So far it is pretty neat and straightforward. It was the only one available here and I wanted to buy locally so I have local and physical support.

    7. As far as the OS is concerned, go with something you are familiar with and with something a LOT of people are well versed in dealing with. You don't want to run into a snag 1yr from now and have nobody know what the hell kind of issue you have.
    WHS would be a good choice, but it uses its own version of software JBOD with shadow copies and therefore doesn't support GPT drives (2TB+) or RAID disks in general. It can be tricked into using all that, but I wouldn't risk my data on some hack making my OS work in a way it wasn't meant to.
    Not all Linux distros are supported well by various controllers, and as far as I saw they often require some driver tweaking to work. That's also not something you would want.
    Regular Win is probably the safest route to go. And for a simple server I don't know what would be so bad about it, given the huge user base and experienced users you get to take advantage of.

    I will go for 8 1TB disks in RAID5, one 500GB OS drive and a 1TB disk for backing up "important data".
    The backup HDD I will leave connected in the drive cage, but I will put a switch on the back of the case so I can turn it on and off (DIY hot swap )
    The server will be used for storing HD movies and other stuff like that, as well as pics, software files, a small local website and a torrent client.
    While all that stuff will consume a lot of space, 90% of it can be retrieved from various sources. The other 10% will be backed up on that drive I mentioned.
    The drives in the RAID5 array will probably be in sleep mode when not used, to cut down on power usage and drive wear. But we'll see if the slow availability will be an issue or not.

    That is "all" I can think of now...
    Oh, and don't rush it. Build a decent, thought-through system now so you don't cry later. That's why mine is taking so long and I'm down to my last GBs in the house

    Regards!

  8. #133
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    Quote Originally Posted by XS Janus View Post
    1. Do you want to edit your videos locally on the server and use it as an active rig?
    no, the rig in my sig is my editing machine

    Quote Originally Posted by XS Janus View Post
    2. What is the notebook ram for?
    that's to expand the cache memory on the RAID card to 2GB (from 256MB default)

    Quote Originally Posted by XS Janus View Post
    3. try and reduce all the un-needed parts. (less things to break down the better) Thing about going IGP. You don't need 3D.
    yeah, the only reason i'd get a graphics card is if the mobo doesn't have IGP. that would be my preference too, but i haven't seen a mobo i like with IGP (of course I was only looking at ICH9R at the time)

    Quote Originally Posted by XS Janus View Post
    4. If you wanna make backup off all your stuff that will get expensive. You could go for Raid1.
    I think that's too expensive for this, and the real key for me is having the copy offsite... ie not a copy in the same machine like RAID1.

    Quote Originally Posted by XS Janus View Post
    Also, backup on the array on the same hardware controller as my main array is also not something that sounds very safe to me. :/
    Quote Originally Posted by XS Janus View Post
    I will go for 8 1TB disks in Raid5, one 500GB OS drive and 1TB disk for backing up "important data"...
    interesting. why would a single drive that isn't part of the array be unsafe? i suppose I could use the mobo SATA for this particular drive. Is that what you're doing with your 1TB backup disk?

    Quote Originally Posted by XS Janus View Post
    5. I see you would like to go for 12 drive controller. I must stress this for you again, if you want to add drives to your array later that will take a LONG time ...
    Even if I only get 4 drives right now? And I'm not set on that 1231ML - it's just the one I listed. It seemed like the best one for growth. I may not need it now, but after I'm at, say, 11TB in 3 or 4 years, it might have been a good call. who knows, but I'm open to alternatives.


    Quote Originally Posted by XS Janus View Post
    7. As far as the OS is concerned....
    I'm with you on WHS... seems like it's close, but just not quite right. the thing with 2k3 or 2k8 server is the cost. linux is way cheaper... and it "seems" safer - eg from virus attack



    I've been planning this build for months. I think I need to just start buying stuff. My plans evolve too frequently to nail anything down. I started building the rig in my sig back in Nov 2007. I've swapped vga twice, mobo once, and all those spare parts I listed are things I've also replaced LOL


    Thanks again!
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  9. #134
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I meant "unsafe" more as in "unwise". But if you put just one drive to serve as a backup, a failing controller won't mess up data like if you had a small backup array on the same controller, you are right about that.
    But also using up Raid ports on your expensive controller for drives not in any array just sounds wasteful to me.

    You are right: for my backup HDD I will use mobo ports, as well as for the system drive.

    I'm using a Gigabyte G33M-DS2R motherboard for a couple of reasons.
    Good underclocking options, IGP, support, availability, solid caps and Pci-e 4x.

    As for array growth/migration time, I don't think it matters how much you have used up at all, rather how much it has to recalculate all over again.
    I set this process in motion by accident and noticed it is slow. Then I googled around and found some poor guy like us who was crying about it. He also called 3ware and they said that's normal - 3MB/s LINK
    I will try and confirm this later, but I can tell you I let the process go on for 1.5hrs while doing research and it came to 1%
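    To put that 3MB/s figure in perspective, a quick back-of-the-envelope Python calculation (the array size here is just an example, roughly the usable space of 4x1TB in RAID5):

    # How long an online expansion/migration would take at the 3 MB/s figure above.
    usable_tb = 3.0      # example only: ~usable space of 4x1TB in RAID5
    rate_mb_s = 3.0

    seconds = usable_tb * 1_000_000 / rate_mb_s   # TB -> MB, then divide by MB/s
    print(f"~{seconds / 86400:.1f} days")          # -> ~11.6 days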

    The Areca you mentioned should be faster than my 3ware, but even if it were twice as fast I don't see migration as any kind of solution if you don't have a complete backup of everything or if keeping the server online is very important.

    Also, the thread limitation will definitely be the bottleneck for any RAID5, so getting a faster card for a single user loading the server at any one time also doesn't make much sense.

    I'll post my screen shots tomorrow and you will see how things stand here.
    All in all I'm pretty satisfied with my initial findings, even though I will have to invest in 4 more drives sooner than I expected
    But that will give me the chance to do a full battery of tests to see how 8-drive setups compare to 4-drive RAID5 and RAID0 setups

  10. #135
    Xtreme Member
    Join Date
    May 2008
    Posts
    462
    As for the card, I have an Adaptec 5405 and LOVE it. In a recent Maximum PC review it beat the crap out of every card in every category (RAID 5 and RAID 0). My card only has one multilane connector for only 4 drives, but there are versions that will support up to 12 SATA drives.

    This card was about $350 and beat cards costing over $1,000 by A LOT in the Maximum PC review.

    Also, are you looking for hot-swappable drives?

  11. #136
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    Quote Originally Posted by crazy1323 View Post
    Also, are you looking for hot-swappable drives?
    yes, for my backup drives (which may not be on the RAID array)
    for the RAID storage, it would be nice, but not a requirement
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  12. #137
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    So finally here are some first results I managed to record.

    RAID5 using 4xWD1000FYPS drives

    HD Tach: Each setup was run several times using different stripe sizes, as indicated on the pics themselves.



    HD Tune: Each setup was run several times using different stripe sizes, as indicated on the pics themselves.



    ATTO Benchmark: This test is one of the very few that can do synthetic WRITE tests, so I just had to run it. Because it does write tests as well, I ran the benchmark with write-back cache enabled and disabled so you guys can see the synthetic difference too
    *I left the benchmark "stock" and ran it. If someone knows a better way of setting it up for home file/media server tests, let me know and I'll do it over.
    Each setup was run several times using different stripe sizes, as indicated on the pics themselves.
    Write-back cache DISABLED



    Write-back cache ENABLED



    These next tests are real-life file copy tests that I did by locally shuffling a TestFolder1 made of a single 2GB or (more often) 4GB+ video file, and a separate test copying a TestFolder2 made of 1206MB of big, small and very small system files I took from C:\, averaging ~163KB per file.
    The folders were copied from a 150GB Raptor, resulting in an 83.5MB/s write cap in some cases.
    Read tests I haven't done, but most would be capped by a single Raptor anyway.
    Read/write tests were performed by copying each of the test folders one by one from the array onto the array itself.
    Tests were also performed with and without write-back cache to show its impact.
    Results are an average of several runs.


    ALL tests were done on 4xWD1000FYPS drive setups.
    The ATTO bench pic with write-back cache is labeled wrong, and its results look a bit fishy on the 16k stripe. I will redo that test tomorrow and replace it with the correct label.

  13. #138
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I did the RAID0 tests just to see how it goes.
    Here are the recorded results, presented the same way as the RAID5 ones.
    If you have any objections, please let me know so I can correct them.


    RAID0 using 4xWD1000FYPS drives

    HD Tach: Each setup was run several times using different stripe sizes, as indicated on the pics themselves.



    HD Tune: Each setup was run several times using different stripe sizes, as indicated on the pics themselves.



    ATTO Benchmark: This test is one of the very few that can do synthetic WRITE tests, so I just had to run it. Because it does write tests as well, I ran the benchmark with write-back cache enabled and disabled so you guys can see the synthetic difference too
    *I left the benchmark "stock" and ran it. If someone knows a better way of setting it up for home file/media server tests, let me know and I'll do it over.
    Each setup was run several times using different stripe sizes, as indicated on the pics themselves.
    Write-back cache DISABLED



    Write-back cache ENABLED



    These next tests are real-life file copy tests that I did by locally shuffling a TestFolder1 made of a single 2GB or (more often) 4GB+ video file, and a separate test copying a TestFolder2 made of 1206MB of big, small and very small system files I took from C:\, averaging ~163KB per file.
    The folders were copied from a 150GB Raptor, resulting in an 83.5MB/s write cap in some cases.
    Read tests I haven't done, but most would be capped by a single Raptor anyway.
    Read/write tests were performed by copying each of the test folders one by one from the array onto the array itself.
    Tests were also performed with and without write-back cache to show its impact.
    Results are an average of several runs.


    ALL tests were done on 4xWD1000FYPS drive setups.
    Please feel free to analyse my findings and add your suggestions and observations.

  14. #139
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    Tip if you guys are running Vista:

    DISABLE MMCSS

    Start -> Run -> Services.msc -> Multimedia Class Scheduler -> Disable

    I was trying to figure out why my network transfers were terribly slow -- 55 MB/sec write, 15 MB/sec read. This is over a gigabit network with a test Windows 2008 Server w/2x1 TB in RAID0 and a Samsung 1 TB HDD on my Vista X64 workstation. I read about how playing music causes MMCSS to throttle network file transfer speeds and realized this is my problem.

    Now I no longer need to close Steam to get 99.93% utilization of a 1 Gbps connection.
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  15. #140
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Thanks!

    Here it is explained on Wikipedia: http://en.wikipedia.org/wiki/Multime...eduler_Service

    The question is: would it do more harm than good to disable it on a fileserver in a situation where 2 clients are watching movies and listening to music streamed from it?

  16. #141
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    Quick update of my own tests:

    4 x WD 1 TB, RAID5 - 16 K stripes

    E6300, 4 GB RAM
    P35-DS3R - ICH9R
    Windows Server 2008 Enterprise
    Intel MSM 8

    Write speeds: 30-40 MB/s
    Read speeds: 80-100 MB/s (capped by gigabit network)

    Write-back cache enabled, as well as "advanced performance".

    Ick!

    Gotta run now, but this makes me pine for a real raid controller!!
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  17. #142
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    I THINK I FIGURED IT OUT

    On my P35-DS3R.. look at where the Gigabit connection is attached.



    I came upon this realization after running ATTO locally on the server computer. Then when I was searching for other people's RAID5 ICH9R results, I found someone else who indicated they had the same issue... the person could not get their results to be consistent when run locally and over the network. Then I realized maybe it has something to do with how the NIC is attached.

    Now time to check out my other motherboards.

    EDIT:
    Looks like unless I get a gigabit nic and toss it in the PCI-E 16X slot I'll have the same problem everywhere I go:

    Last edited by zoob; 05-19-2008 at 09:24 AM.
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  18. #143
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    How exactly did you measure your read and write speeds over the network? Are those real-life speeds you are getting?
    Do you mean you can download a large file over your network at 80-100MiB/s to your client, or did you just do some synthetic test?

    I thought that using Windows you were pretty much capped at ~50MiB/s per client (thread) connected to the server due to the SMB protocol?

  19. #144
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    XS Janus,

    WS2008 - E6300/4GB/P35-DS3R, 4xWD 1TB, Intel Matrix Storage 8.0 drivers
    Vista X64 SP1 - X3350/4GB/P5E-VM HDMI, 2xSamsung 1TB (non-raid), MMCSS disabled
    Connected by GigaE

    I test network throughput by transferring an 8 GB DVD image

    ICH9R RAID0
    - 100 MB/sec read/writes through the network. No 50MiB/sec cap.

    ICH9R RAID5 - 4x1 TB, 16 K stripes, write cache enabled, advanced performance enabled
    - ATTO on WS2008 - anything over 8 MB I get >100 MB/sec read/writes
    - From my Vista X64 workstation: 30-40 MB/sec writes (Vista X64 -> WS2008) and 80-100 MB/sec reads (WS2008 -> Vista X64)
    - From my WS2008 server: 100 MB/sec writes (WS2008 -> Vista X64) and 100 MB/sec reads (Vista X64 -> WS2008)

    That sort of blows my theory of the Gigabit NIC connection point out of the water.

    Then I stumbled upon this: http://blogs.technet.com/markrussino...4/2826167.aspx
    The one case where the SP1 file copy engine doesn't use caching is for remote file copies
    Perhaps that is what I'm experiencing: Vista X64 SP1 not caching, but when I do the copy on the WS2008 machine, it is caching.

    This is so confusing!!!!!!! :|
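    (For anyone wanting to repeat the test above: timing one big file copy and dividing by its size is all it takes. A minimal Python sketch; the paths are placeholders, not the actual shares used here:)

    # Minimal version of the "copy an 8 GB image and time it" throughput test.
    # Source/destination paths are placeholders.
    import os, shutil, time

    SRC = r"D:\iso\big_image.iso"              # large local file
    DST = r"\\server\share\big_image.iso"      # SMB share on the file server

    start = time.time()
    shutil.copyfile(SRC, DST)
    elapsed = time.time() - start

    mb = os.path.getsize(SRC) / (1024 * 1024)
    print(f"{mb / elapsed:.1f} MB/s average write to the share")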
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  20. #145
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    Clearly something is off here. You wrote:
    "From my Vista X64 workstation: 30-40 MB/sec writes (Vista X64 -> WS2008) and 80-100 MB/sec reads (WS2008 -> Vista X64)"

    Does "From my Vista X64 workstation" mean: file copy procedure initiated with my Vista X64 workstation to my WS2008 server?
    If yes, then that means you get 30-40MiB/s reads from the Vista machine and 30-40MiB/s writes on the server when you do it like that -- meaning you have a bottleneck when initiating file copies from your Vista x64 client to your server.
    Is this correct?

    If yes, then my understanding of the quoted paragraph is that when you initiate file copies in Vista SP1, since it does not know where and on what setup the copied files will end up, the OS doesn't use caching, thus resulting in slower performance for files copied from SP1 clients to the WS2008 server.

    Obviously, when you say that initiating the image file copy from your WS2008 machine gets you up to 100MiB/s in all instances (writing to and reading from your Vista x64 machine), Windows Server 2008 doesn't have that sort of "brake" enabled.
    The reason could be so that WS2008 server systems can communicate faster between themselves, while each client running regular Win is capped at some point and thus cannot bog down the server by itself.

    It would be interesting if you could install another WS2008 OS and do the same test, but this time initiate the copying from the WS2008 workstation -> WS2008 RAID5 and see if the cap is gone.

    It is pretty confusing...

  21. #146
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Toronto, ON
    Posts
    517
    Yes, the file copy was initiated on my Vista X64 workstation with Vista as the source and WS2008 as the destination.

    I'll see how much time I have to setup an additional PC running WS2008 and attempt a WS2008->WS2008 RAID5 copy.
    i7 3770k - p8z77-v pro - 4x4gb - gtx680 - vertex 4 256gb - ax750
    i5 3570k - z77-pro3 - 2x4gb - arc-1231ml - 12x2tb wdgp r6 - cx400
    heatware

  22. #147
    Registered User
    Join Date
    May 2008
    Posts
    3
    XS Janus: have you tried the 3ware card in the x16 PCI-E slot?

    I am considering almost the same setup as yours (G33M-DS2R, 3ware PCI-E), but am wondering if the 3ware can work in the x16 slot.

    If you have information or have done some tests, please post your experience with the x16 slot.

    Thanks

  23. #148
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Hey Janus,

    I understand your main concern was to build a low-power fileserver, right? Because you've gone straight for the slowest possible combination... the 1TB GP WDs are about the slowest drives out there (but you save a few W per disk), as is the 3ware 9650. I've got one here too; performance-wise it just plain sucks.

    If you wanted to go the performance route I'd have recommended the new Adaptec 5805 + 4x Samsung F1 1TB disks. They need about 3W more per disk than the WD GP drives, but they are twice as fast. Same goes for the controller. With that combination, you're looking at 250-300MB/s rather than 55MB/s in RAID5...
    I just hope you won't be limited by your RAID5's performance.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  24. #149
    Registered User
    Join Date
    May 2008
    Posts
    3
    jcool: based on what are you concluding that the 3ware is the slowest of them all?

    And why Adaptec - why not Areca, for example?

    Since you mentioned you have a card other than the 3ware - can you please perform some tests along the lines of Janus' tests so others can see real results? Not only synthetic tests (for example, Areca's cards "exploit" ATTO's algorithm very well, so they always have extreme results in ATTO benchmarks).

    Also, about the 3ware's performance - are you sure an 8-disk setup can make that big a difference in performance? I.e. aren't the hard drives the limiting factor in that case rather than the controller(s)?

    Also please note what Janus has written about the "55MB/s factor":
    copying each of the test folders one by one from the array onto the array itself.
    Sequential writes/reads are always faster than real-life use, so comparing ATTO's results to an "array to array" copy isn't a good comparison.

    I am not trying to argue or anything, just trying to see some results, since I am considering getting a RAID5 setup and am still wondering which controller with what kind of 1TB drives.

  25. #150
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Simple: the Adaptec 5X05 series is the only one yet to feature the latest Intel IOP. Being a 1.2GHz dual-core chip (!), it pretty much beats every other controller card when it comes to parity RAID. I own a 5405 but unfortunately haven't done RAID5 benches yet.

    I just did some benches with my single 15K SAS drive and two Samsung F1 1TB in RAID 0 on the Adaptec.
    You can find them if you click on the controller in my sig.

    The 9650 I have is the 2-port variant, but since that one's already terribly slow even in RAID 0 (way slower than the Intel onboard RAIDs) I don't think the 8-port will be much better. I also heard lots of other people complaining about the 9650 series' performance.

    You can tell that the controller, and not the drives, is capping performance by looking at the HDTune or HDTach curves, for example.
    Begin and end transfer rates are virtually the same; if it were the drives' limit, the transfer rate at the beginning would be a lot higher than at the end.

    For example, this is my single Savvio 15k.1 @ Adaptec:



    You can clearly see the difference between the start (110MB/s) and final (80MB/s) transfer rates.
    You can also see why it is important to use a good controller: the following two are still my single SAS drive in the ATTO bench, the 1st made with a Promise TX2650 (a cheap, crappy controller) and the 2nd made with the 5405.




    See any difference? (Ignore the peaks, that's the controller's cache and muscle flexing.)
    Slow controllers usually don't show their weakness when it comes to reads, but rather when it comes to writes.
    Last edited by jcool; 05-29-2008 at 11:40 AM.
    World Community Grid - come join a great team and help us fight for a better tomorrow!

