
Thread: Home NAS/server questions

  1. #1
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644

    Home NAS/server questions

    Hi folks, long time no post but good to see XS still kicking along. Not sure if this belongs more in Server or Storage, but I'm guessing Server, so here goes:

    Looking to redo our home network and centralise things a bit, hopefully without sacrificing too much in the way of performance and usability. Goals for this build are:
    • serve as a central and convenient point of storage for everything from movies to wedding photos to files for my wife's Masters thesis, so need both space and a fairly high level of redundancy and data protection. Truly important commercial (tax) stuff is backed up offsite, so what is left, while valuable, doesn't justify off-siting (plus there's a lot of stuff, currently sitting at about 4TB)
    • be relatively futureproof, providing at least 12TB of total space, whilst running relatively quietly
    • play nicely with both Windows and OSX
    • as I'm a bit concerned about the recent surveillance/data mining revelations brought to light (yeah yeah tinfoil etc), I'd like to retain control and ownership of all the stuff we currently have sitting in various cloud services (Dropbox etc), so needs to run some sort of personal cloud service (BitTorrent Sync, OwnCloud, etc - open to suggestions here)
    • be fast, or at least as fast as reasonably possible, because if it doesn't relatively seamlessly replace storing stuff on local HDDs I know my wife won't use it which defeats the purpose
    • hence perform well over our LAN (support teaming, jumbo frames, etc - see the rough bonding sketch just below this list)
    • not die (server etc) and not require fiddling to keep up. Particularly as we want to use it as a cloud replacement; if my wife's at Uni and can't access her Masters stuff like she currently can with her Dropbox account, she won't use it again which again defeats the purpose. I'll also be somewhat unimpressed if I can't access notes and files from work. Hence, uptime, reliability, etc (another reason why we need data protection and redundancy).
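
    For the record, the "teaming" bullet above would most likely be handled with the Linux bonding driver rather than anything vendor-specific. A minimal sketch of what I understand /etc/network/interfaces would look like on Debian (needs the ifenslave package; interface names, addresses and the LACP mode are just illustrative assumptions):

        # /etc/network/interfaces - bond the two onboard NICs (names/addresses illustrative)
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bond-slaves eth0 eth1     # the two onboard NICs
            bond-mode 802.3ad         # LACP - needs a switch that supports it (balance-alb doesn't)
            bond-miimon 100           # link monitoring interval in ms
            mtu 9000                  # jumbo frames, only if everything on the segment supports them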

    I've done my own digging and contemplating, and have worked out what I think will do a reasonably good job hardware-wise. Keep in mind that I'm in Australia, so availability of parts locally is somewhat restricted compared to you folks in the USA.
    • CPU: some manner of low-TDP i5 part, currently looking at a 4570S
    • CPU cooling: Noctua NH-U12P. Overkill, but I have one spare from a prior build. May even be able to run fanless (that'd be nice).
    • Mobo: Probably an MSI Z87I mITX board, but really anything in the form-factor with dual NICs that support teaming will do
    • RAM: 2x4GB DDR3-1600 of some manner (OR - contemplating 2x8GB if I do end up running ZFS, see below)
    • OS HDD: 240GB Intel 730 SSD. Way too much capacity for intended usage, but this is the smallest size I can find one in (AND - wondering whether I can partition it and use some for ZIL/L2ARC if I end up running ZFS, see below)
    • RAID: LSI MegaRaid 9361-8i
    • RAID HDDs: 6x WD 3TB 'Red' drives in RAID-6
    • Case: BitFenix Phenom mITX. Will replace fans with Noctua variants, as much happier with their noise profile.

    The RAID controller is already purchased (and was the subject of some debate, see below). The rest of the hardware is open to any alternate suggestions.

    All that's missing is an OS (and any suggestions/feedback are welcome). I am currently leaning towards running some flavour of Debian. I am, however, by no means an expert in *nix; my experience is limited to running Debian rather than MacOS on some inherited hardware back in the PowerPC days. I like to think that I'm reasonably capable of working around difficulties as they arise, though, so despite being much more comfortable in Windows environments I'm reasonably sure some manner of *nix distro will serve the intended purpose better than running Server 2012 or something (I'd also really like not to have to shell out more cash just for a software license).

    I am undecided on some specifics, though. Firstly, I'm reading a lot about ZFS. Filesystems are not my forte, but there is a good deal of positive buzz about it, both from a performance (good) and data security (doubleplus good) perspective. On the other hand, I am also hearing reports of slower performance than EXT3 or 4, that it can be hugely complex to maintain, and that it has enormous hardware requirements to produce good performance (a friend tells me not to even consider it unless I have >24GB of RAM, which seems a rather high requirement). I have done as much digging as I can on this issue, but haven't really found a clear (and recent - a lot of what's written on the topic is years old) cost/benefit summary versus just running EXT4.
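
    To make the ZFS option concrete (and so people can correct me if I've got the wrong end of the stick), my understanding is that the basic setup on ZFS-on-Linux is only a handful of commands - a rough sketch, with the pool/dataset names and device paths purely illustrative:

        # six-drive double-parity pool; ashift=12 for the 4K-sector Reds
        zpool create -o ashift=12 tank raidz2 \
            /dev/disk/by-id/ata-WDC_WD30EFRX_1 /dev/disk/by-id/ata-WDC_WD30EFRX_2 \
            /dev/disk/by-id/ata-WDC_WD30EFRX_3 /dev/disk/by-id/ata-WDC_WD30EFRX_4 \
            /dev/disk/by-id/ata-WDC_WD30EFRX_5 /dev/disk/by-id/ata-WDC_WD30EFRX_6

        zfs set compression=lz4 tank   # cheap CPU-side win
        zfs set atime=off tank         # skip access-time writes
        zfs create tank/media          # one dataset per share keeps snapshots manageable
        zpool status tank              # sanity-check layout and health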

    On a related note, I am also unsure whether I should be looking at Debian-Linux or Debian-BSD. I know nothing about BSD, but again, my Google-Fu tells me that it is supposedly not quite as well-supported as the Linux kernel and can be slow to receive updates - however, its native version of ZFS is supposedly more reliable and generally better than ZFS-on-Linux (which is, again according to sources some years out of date, much slower). I'd like to maintain at least some degree of familiarity with the OS running on this build, but if running Debian-Linux will measurably impact performance, uptime or access to updates then I'd consider it worth the time to learn my way around BSD.

    Finally, I'm wondering if anyone has any input on the use of a HW RAID controller vs software RAID (either using mdadm or ZFS). Back when I last had any exposure to this stuff, HW RAID was miles faster (particularly in relation to parity RAID, which I am proposing to use); however, when I posed these questions to my closer circle of friends, opinion was split on whether to use software RAID instead, as it removes the RAID controller as a potential single point of failure and is supposedly just as fast. My Googling turns up a few threads on the issue, and from what I'm seeing performance on a HW RAID controller is still faster, up to about 35% when running mixed read/write operations.

    As above, I already own the controller I'm proposing to use (had to order the thing from the US and use a shipping forwarding service; there's basically nothing available locally that isn't already crammed into a server enclosure), so this is more from academic interest than whittling down the parts list. To my mind, though, the risk of the RAID controller dying seems vanishingly small (all the ones I've had experience with have delivered years of continuous uptime with nary a hitch), and probably significantly smaller than that of the hosting motherboard dying. Still, if SW RAID is as fast or faster (ZFS RAID management in particular is supposedly very good indeed) then I'd be silly to be using the card just for the sake of it. Interested in opinions.
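
    And for completeness, the mdadm alternative people keep mentioning appears to be equally brief - again just a sketch, with drive letters made up:

        # software RAID-6 across the six Reds, EXT4 on top (drive letters illustrative)
        mdadm --create /dev/md0 --level=6 --raid-devices=6 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
        mkfs.ext4 /dev/md0
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so the array assembles at boot
        cat /proc/mdstat                                  # watch the initial resync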

    Thanks for any input! Expect a build log when all is settled
    Last edited by SoulsCollective; 04-24-2014 at 09:31 PM.

  2. #2
    Xtreme Member
    Join Date
    Feb 2007
    Location
    South Texas
    Posts
    359
    For BSD-type stuff, look at NAS4Free and FreeNAS - they usually have plugins or modules that might fit your needs. I once tried NAS4Free, but I need to be able to run BOINC, so that was kinda moot.


    I've been using Windows Server 2008 R2 for some time now and have not had any issues with it, even though I've changed hardware a lot. With just one Intel NIC I get over 110 MB/s writes across a Cat6 gigabit network setup. That's even with running BOINC in the background and functioning as an HTPC till my Z-Box gets back from RMA.

    My Server:
    Supermicro H8SGL
    AMD Opteron 6128
    8gb Corsair XMS3 DDR3-1333
    HighPoint RocketRaid 2720
    Raid 5:4x WD 750GB Green
    Windows Server 2008 R2 Standard
    Fractal Define R4
    Last edited by Hawkeye4077; 04-24-2014 at 04:24 AM.

  3. #3
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Thanks for the input - I did look at OpenNAS etc, however my question re BSD was more on a cost/benefit front of using Debian-BSD or Debian-Linux, given that from my reading
    • ZFS on BSD is more up-to-date, being a native implementation, than ZFS-on-Linux
    • BSD generally offers fairly good device support, but less so than Linux
    • Debian-BSD is supposedly slower than Debian-Linux, apparently suffering most in the areas of data throughput


    On a related note, I have learnt that the Intel 530 series I was originally proposing to use does not have power loss data protection (ie supercap or similar to ensure writes are committed on power loss), despite the 320-series it replaces having such. Have therefore specced up to a 730, which is a pain as the lowest capacity I can find them available in is 240GB. Still, I can use it for ZIL/L2ARC if I end up running ZFS.
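
    (If I do end up on ZFS, my understanding is that putting the spare SSD space to work is just a matter of handing partitions to the pool - sketch only, with the pool name and partition paths made up:)

        # after carving a couple of partitions out of the 730 with gdisk/parted:
        zpool add tank log   /dev/disk/by-id/ata-INTEL_SSDSC2BP240G4-part2    # SLOG/ZIL - a few GB is plenty
        zpool add tank cache /dev/disk/by-id/ata-INTEL_SSDSC2BP240G4-part3    # L2ARC - size to taste
        zpool iostat -v tank                                                  # confirm the log/cache vdevs show up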

  4. #4
    Xtreme Member
    Join Date
    Feb 2007
    Location
    South Texas
    Posts
    359
    For BSD hardware support, look here:
    http://www.freebsd.org/releases/10.0R/hardware.html
    http://www.freebsd.org/relnotes/CURRENT/hardware/

    I don't know if either of the two free NAS distros has transitioned to 10/11 yet, but my 2720 had a love/hate relationship with 9.3 when I tried it last.

  5. #5
    I am Xtreme
    Join Date
    Jan 2006
    Location
    Australia! :)
    Posts
    6,096
    Julian! Long time no hear or see!

    re: ZFS needing heaps of RAM... bulldust! You only need a truckload of RAM if you enable dedup (this is where ZFS/the OS checks whether you have two or more copies of files). If disabled, I've heard of people running it with as little as 2GB, and I myself have played a little with FreeNAS / NAS4Free with either 2 or 4GB or thereabouts.
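
    (Easy enough to double-check, too - a quick sketch, pool name assumed:)

        zfs get dedup tank                        # 'off' is the default; leave it that way unless you have RAM to burn
        zfs set dedup=off tank                    # make it explicit if you're paranoid
        zpool list -o name,size,dedupratio tank   # dedupratio stays at 1.00x with dedup off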

  6. #6
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Argentina
    Posts
    412
    Quote Originally Posted by SoulsCollective View Post
    Thanks for the input - I did look at OpenNAS etc, however my question re BSD was more on a cost/benefit front of using Debian-BSD or Debian-Linux, given that from my reading
    • ZFS on BSD is more up-to-date, being a native implementation, than ZFS-on-Linux
    • BSD generally offers fairly good device support, but less so than Linux
    • Debian-BSD is supposedly slower than Debian-Linux, apparently suffering most in the areas of data throughput


    On a related note, I have learnt that the Intel 530 series I was originally proposing to use does not have power loss data protection (ie supercap or similar to ensure writes are committed on power loss), despite the 320-series it replaces having such. Have therefore specced up to a 730, which is a pain as the lowest capacity I can find them available in is 240GB. Still, I can use it for ZIL/L2ARC if I end up running ZFS.
    I don't know if it meets all your needs, but FreeNAS is an excellent option. I have a modest FreeNAS server, with 4x1TB WD Blue HDDs, 8GB RAM and a Pentium G2030. I have a CIFS share, with most of the space used for it, and a 500GB Datastore for my Virtual Machines (Media Center, VoIP, VPN, AD, Mail, SQL, and 6 virtualized ESXi).

    Performance wise, it's excellent. On Windows I have ~900Mbps bandwidth, with 1 gigabit uplink. My 2 ESXi servers are connected with 2x1gbps uplinks with iSCSI, and read/write performance is up to 200MB/s.

    But, if you seek performance, you should use at least 8GB RAM, and I would recommend 16GB or more if possible. SSD for ZIL and L2ARC is useless unless you know what you are doing. First of all, for a ZIL (SLOG) device you'd want two SSDs in RAID 1, and only sync writes will see a boost in performance. CIFS, AFP or iSCSI (Windows client) won't see any performance benefit because they don't do sync writes.

    L2ARC does not need to be RAIDed; you can use a single SSD for it. It acts as a read cache, it's recreated each time the NAS boots, and it only gets used once you run out of RAM.

    Instead of throwing SSDs at it for more performance, which won't happen because of the way it works (again, unless you have a lot of sync writes and/or you work with more data than can fit in your RAM), I would buy 32GB of ECC RAM.
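
    (If you want to sanity-check whether more RAM would actually help before buying, the ARC statistics are easy to watch on the FreeBSD/FreeNAS side - a rough sketch, pool name assumed; zfs-stats comes from the sysutils/zfs-stats port:)

        zfs-stats -A                                                        # ARC size, target and hit ratio summary
        sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
        zpool iostat -v tank 5                                              # per-vdev load, including any cache device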

    Edit: oh, also, for OS just use an 8GB USB pendrive, if you go with FreeNAS or something like that.
    Last edited by Andi64; 05-06-2014 at 10:29 PM.

  7. #7
    Xtreme Addict
    Join Date
    Jan 2007
    Location
    Michigan
    Posts
    1,785
    Quote Originally Posted by SoulsCollective View Post
    Hi folks, long time no post but good to see XS still kicking along. Not sure if this belongs more in Server or Storage, but I'm guessing Server, so here goes:

    Looking to redo our home network and centralise things a bit without hopefully sacrificing too much in the way of performance and usability. Goals for this build are

    Thanks for any input! Expect a build log when all is settled
    I use an Intel NUC and a PROBOX 4HDD/SSD USB3 eSATA drive enclosure connected via USB 3. No RAID, just straight with cloud backup - if a drive fails I buy a new one and re-download the contents from the cloud backup service. It's fast, low power, low cost, has active cooling for HDD life, and works well as my server.

    EDIT: I've been down the full server path over the last twelve years or so, and honestly the evolution to my setup above has been the best experience so far. I've got kids and a wife, along with a farm complete with farm animals to look after, so I can't be bothered with babysitting my server very often... I don't worry about having the correct modules, drivers, RAID cards, software RAID, backup scripts, or fan failure (although the two fans (one in the NUC, one in the enclosure) could fail). One thing to mention is that I am fortunate not to have bandwidth caps enforced, and I have a 5Mb/50Mb connection, so the cloud backup coupled with the simplified setup above works well for me. Transfer speeds are around 45-60MB/sec with initial bursts of ~100MB/s. Total storage I have is about 6TB across five cloud-protected drives. It's file-level backup, but I do have versioning so I can go back about a month if I notice a corrupt file in time... Power-wise this setup pulls about 35W at the wall on the Kill A Watt. I also have a sync data directory that I sync and share with Google Drive so I can share files, and I have remote access via OpenVPN. I think the NUC + enclosure and mSATA SSD for the NUC + RAM and HDDs cost me around $600. I use CrashPlan for backup at about $70 per year. So not free, but cheaper for what it does... Anyway, many things to consider :-)
    Last edited by Vinas; 05-08-2014 at 05:33 AM.

  8. #8
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Thanks for the additional input folks

    Update time: server is built, and has been named Sugarcube by my wife. Sugarcube consists of:
    • CPU: i5 4570S
    • CPU cooling: Noctua NH-U12, currently running fanless and sitting at about 30C idle, 45C load
    • Mobo: MSI Z87I mITX board. As above, this is literally the only mITX board I can find that supports teaming, so no ECC support.
    • RAM: 2x8GB Corsair DDR3-1600
    • OS HDD: 240GB Intel 730 SSD. Way too much capacity for intended usage, but this was the smallest size I could find an SSD in that has some manner of power-loss data protection
    • RAID: LSI MegaRaid 9361-8i with BBU
    • RAID HDDs: 6x WD 3TB 'Red' drives
    • Case: BitFenix Phenom mITX. Fans replaced with Noctua NF-P12s (front fan will be a BitFenix 230mm Spectre, which is currently on order)


    I'm using the Phoronix Test Suite to benchmark each of the storage configs considered; however, I work full-time and each run takes about eight hours, so combined with me fiddling and trying to remember how to Linux, progress is slow. The OS currently installed is Debian stable (Wheezy 7.5) for all tests, so some more speed may be eked out with a newer kernel, but the stability is more important to me (although the lack of kernel support for the Intel HD chip means Gnome is currently running in fallback mode...yay).
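
    (For anyone wanting to replicate the runs, the PTS invocation is nothing exotic - roughly the below, though treat the package and suite names as from memory rather than gospel:)

        apt-get install phoronix-test-suite
        phoronix-test-suite benchmark pts/disk      # IOzone, Dbench, FS-Mark, Postmark, etc.
        phoronix-test-suite list-saved-results      # results can then be uploaded to OpenBenchmarking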

    Reference run (on the Intel 730 SDD): http://openbenchmarking.org/result/1...BY-SSDTEST9384

    Test using the LSI controller to do all the work (RAID-6), formatted as EXT-4: http://openbenchmarking.org/result/1...BY-LSIR6EXT455

    Additional results to follow, first running ZFS on top of a R6 array managed by the LSI card, then ZFS running on the raw drives passed through the card, then ZFS running on each drive configured through the controller as a separate RAID-1 array (which should allow the controller's cache and write optimisations to come into play).

  9. #9
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Again using RAID-6 powered by the LSI controller, but running a ZFS filesystem on top: http://openbenchmarking.org/result/1...BY-TEST7477399
    No tweaks or alterations have been made for this run - no L2ARC, ZIL, etc - just the same logical drive as in the above test with ZFS on top. Much worse than EXT4, even though it should be benefitting from the same cache optimisations as the EXT4 test; however, none of the advantages of ZFS are in play here.

  10. #10
    I am Xtreme
    Join Date
    Jan 2006
    Location
    Australia! :)
    Posts
    6,096
    Quote Originally Posted by SoulsCollective View Post
    Again using RAID-6 powered by the LSI controller, but running a ZFS filesystem on top: http://openbenchmarking.org/result/1...BY-TEST7477399
    No tweaks or alterations have been made for this run - no L2ARC, ZIL, etc - just the same logical drive as in the above test with ZFS on top. Much worse than EXT4, even though it should be benefitting from the same cache optimisations as the EXT4 test; however, none of the advantages of ZFS are in play here.
    Highly likely, because running a RAID array with ZFS on top is virtually a no-no / highly unrecommended!

  11. #11
    Registered User
    Join Date
    Oct 2006
    Location
    Upper Midwest, USA
    Posts
    93
    Just wanted to give an encouraging word that I at least am following this to see where you end up.

    I'm about to start my own journey down this path and I'm debating the merits of sticking with a HW RAID card over ZFS software RAID. I'm also enticed by ZFS but leery of discarding an already-in-hand HW card. Your build is very similar to the one I'm contemplating, with the exception of ECC RAM and my plan to use an old Intel X25 I have.

  12. #12
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Currently running PTS on the HW RAID with ZFS on top using a 64GB L2ARC, so should get results in about eight hours or so. In the meantime, to respond to some of the input:

    Quote Originally Posted by Andi64 View Post
    But, if you seek performance, you should use at least 8GB RAM, and I would recommend 16GB or more if possible.
    Current build is using 16GB, which is basically the maximum capacity I can give it given that mITX boards only have two DIMM slots - well, that, and 16GB density modules are practically non-existent.
    SSD for ZIL and L2ARC is useless unless you know what you are doing. First of all, for a ZIL (SLOG) device you'd want two SSDs in RAID 1, and only sync writes will see a boost in performance. CIFS, AFP or iSCSI (Windows client) won't see any performance benefit because they don't do sync writes.
    I understand that the performance benefits will only be seen in specific write scenarios, but given that I have a massive amount of unused SSD space, even allowing for some heavy over-provisioning, and ZIL will only benefit from ~2-3GB of space, I don't really see a reason not to. As for the RAID concern, I am given to understand that this was only a necessity in earlier versions of ZFS, where ZIL failure would hose a pool. This issue has been corrected in more recent builds.
    Instead of throwing SSDs at it for more performance, which won't happen because of the way it works (again, unless you have a lot of sync writes and/or you work with more data than can fit in your RAM), I would buy 32GB of ECC RAM.
    Sadly, as above, there is no ECC option in the mITX form factor, otherwise I definitely would!
    Quote Originally Posted by tiro_uspsss View Post
    Highly likely, because running a RAID array with ZFS on top is virtually a no-no / highly unrecommended!
    From my understanding, it's really more a case of the full featureset of ZFS not coming into play when not running raw drives, or preferably the RAID-Z parity implementations. But even with ZFS running on top of a logical drive, features like snapshots, export/importability, etc are still useful (see the quick sketch at the end of this post) - the question is whether the performance cost/benefit equation works out favourably.
    Quote Originally Posted by Baenwort View Post
    Just wanted to give an encouraging word that I at least am following this to see where you end up.

    I'm about to start my own journey down this path and I'm debating the merits of sticking with a HW RAID card over ZFS software RAID. I'm also enticed by ZFS but leery of discarding an already-in-hand HW card. Your build is very similar to the one I'm contemplating, with the exception of ECC RAM and my plan to use an old Intel X25 I have.
    Heh, thanks - there's a real dearth of actual numbers-based analysis, isn't there! As always, it seems the best solution is just to muck in and do it yourself :P
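
    As promised above, the sort of thing I mean by snapshots and export/import still being useful even on a single logical drive - commands are roughly as follows (names illustrative):

        zfs snapshot tank/media@2014-05-14     # near-instant point-in-time copy
        zfs rollback tank/media@2014-05-14     # back out a mistake
        zfs send tank/media@2014-05-14 | ssh otherbox zfs receive backup/media   # replicate elsewhere
        zpool export tank                      # cleanly detach the pool...
        zpool import tank                      # ...and re-import it here or on another machine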

  13. #13
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    And here we have the same setup as before, just with a 64GB partition of the SSD set aside as L2ARC - http://openbenchmarking.org/result/1...BY-ZFSW64GL253.

    Given that performance has dropped, and by more than the margin of error, I suspect something has not quite gone right, so take the above result with a grain of salt. Taking the system down for a reboot and reinstalling PTS.

  14. #14
    Registered User
    Join Date
    Oct 2006
    Location
    Upper Midwest, USA
    Posts
    93
    Quote Originally Posted by SoulsCollective View Post
    Sadly, as above, there is no ECC option in the mITX form factor, otherwise I definitely would!
    There are two options that I know about:

    1) ASRock Rack, if you have a Haswell i3 (this is what I'm planning on building around); it's available in the US at Newegg for $190. http://www.newegg.com/Product/Produc...82E16813157467

    2) Supermicro, if you don't mind the embedded Ivy Bridge i7-3612QE. I don't know about availability of this one, as it wasn't on my radar due to the soldered-on CPU.

    I've read rumors of an Asus board that is capable of it, but just rumors so far.

  15. #15
    Crunching For The Points! (NKrader)
    Join Date
    Dec 2005
    Location
    Renton WA, USA
    Posts
    2,891
    Quote Originally Posted by Baenwort View Post
    There are two options that I know about:

    1) ASRock Rack, if you have a Haswell i3 (this is what I'm planning on building around); it's available in the US at Newegg for $190. http://www.newegg.com/Product/Produc...82E16813157467

    2) Supermicro, if you don't mind the embedded Ivy Bridge i7-3612QE. I don't know about availability of this one, as it wasn't on my radar due to the soldered-on CPU.

    I've read rumors of an Asus board that is capable of it, but just rumors so far.
    or any of these
    http://www.newegg.com/Product/Produc...286&IsNodeId=1


    on a side note, HOLYCRAP
    Last edited by NKrader; 05-13-2014 at 07:15 PM.

  16. #16
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Yeah they have a nice storage server based on the ASRock board.

    http://www.tweaktown.com/reviews/631...iew/index.html

  17. #17
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Thanks for the input folks, however -
    Quote Originally Posted by SoulsCollective View Post
    Keep in mind that I'm in Australia, so availability of parts locally is somewhat restricted compared to you folks in the USA.
    Plus, server is already built using the above specs

  18. #18
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Second test with above setup using 64GB partition for L2ARC: http://openbenchmarking.org/result/1...BY-ZFSW64GL215

    So, not much improvement here. Time to test running ZFS on drives passed through the LSI controller.

  19. #19
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    And - a RAID-Z2 of six logical volumes each consisting of one drive in RAID-0 managed by the LSI controller (so taking advantage of the controller cache and other optimisations) - http://openbenchmarking.org/result/1...PL-RAIDZ2ONL38

    Results are interesting. The 64GB L2ARC is obviously helping benchmarks here, but generally speeds seem lower across the board than with the HW controller managing a straight RAID-6 formatted as EXT4. I'm planning on tidying up these results slightly and putting up a comparison soon; probably tomorrow.

  20. #20
    L-l-look at you, hacker.
    Join Date
    Jun 2007
    Location
    Perth, Western Australia
    Posts
    4,644
    Right, data:
    Test                             | % change (RAID-Z2 vs RAID-6) | RAID-Z2 on 6xRAID-0 | ZFS on RAID-6 + 64GB L2ARC | ZFS on RAID-6 | EXT4 on RAID-6
    Flex IO (low good)               | -97.15% | 41.58    | 28.37   | 31.99   | 1457.38
    FS-Mark (high good)              | -56.83% | 262.43   | 207.6   | 217.13  | 607.87
    Dbench 1 (high good)             | -32.66% | 623.22   | 530.08  | 537.46  | 925.54
    Dbench 12 (high good)            | -35.94% | 2085.61  | 314.68  | 459.17  | 3255.55
    Dbench 48 (high good)            | -22.05% | 2377.04  | 252.26  | 333.37  | 3049.62
    Dbench 128 (high good)           | -15.10% | 2388.28  | 465.3   | 487.44  | 2812.92
    IOzone read (high good)          | 0.38%   | 7509.26  | 1099.61 | 1236.56 | 7480.87
    IOzone write (high good)         | -22.85% | 148.66   | 101.51  | 187.57  | 192.69
    Thread I/O read (high good)      | 3.25%   | 13289.23 | 2475.33 | 2528.86 | 12870.94
    Thread I/O write (high good)     | -22.85% | 148.66   | 101.51  | 187.57  | 192.69
    Compile init compile (high good) | -14.35% | 499.84   | 791.88  | 702.57  | 583.61
    Compile create (high good)       | -9.23%  | 285.6    | 123.27  | 126.01  | 314.65
    Unpack kernel (low good)         | 0.69%   | 10.16    | 10.74   | 10.74   | 10.09
    Postmark (high good)             | -0.72%  | 5357     | 5842.5  | 5557    | 5396
    Gzip compress (low good)         | 0.07%   | 13.53    | 13.64   | 13.54   | 13.52
    PostgreSQL (high good)           | -39.31% | 5172.22  | 4460.68 | 4604.84 | 8521.75

    First things first, we can probably discard kernel unpacking and gzip compression - results basically do not change across configurations, so I suspect the limiting factor here is CPU speed and not storage. Secondly, I have no idea what is going on with the Flex I/O test such that every ZFS-tested config performs so enormously faster than EXT4 (remember, low numbers are good here), other than that perhaps the entire dataset is fitting in ARC. Worth noting, but I'm not entirely sure how well this will match up to real-world performance.

    IO-Zone read, Threaded I/O read and Postmark are all within the margin of error of the tests, so performance here is basically equivalent. As expected, read performance is not a huge issue for ZFS, particularly when data can fit wholly in one of the ARCs.

    Operations that involve some manner of write access or mixed read/writes are more challenging. On average, RAID-Z2 performs 30.24% slower than RAID-6 on identical hardware and OS. Closest is CompileBench initial creation, worst is FS-Mark.
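
    For anyone checking my arithmetic, the % change column is just the RAID-Z2 on 6xRAID-0 result divided by the EXT4 on RAID-6 result, minus one - e.g. for FS-Mark:

        awk 'BEGIN { printf "%.2f%%\n", (262.43 / 607.87 - 1) * 100 }'    # prints -56.83%
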
    Last edited by SoulsCollective; 05-16-2014 at 07:59 PM.

  21. #21
    Xtreme Member
    Join Date
    Feb 2007
    Location
    South Texas
    Posts
    359
    This is a really nice looking server board.


  22. #22
    Xtreme Member
    Join Date
    Sep 2007
    Posts
    480
    This article popped up on Ars a couple of days ago. It discusses the basic setup of, and a comparison between, FreeNAS and NAS4Free. I know you're not able to, but I've always read that you should use ECC RAM when using ZFS - I'm not at all an expert on these things, though, so I wouldn't know.

  23. #23
    Xtreme Member
    Join Date
    Feb 2007
    Location
    South Texas
    Posts
    359
    I have been testing OpenMediaVault on my system for about a month now. I now use a Dell PowerEdge T20 server (Dell's answer to HP Microservers), and after a little bit of work, managed to get all my required stuff running on it. I use the SnapRAID plugin to calculate parity across the drives, and with a pooling setup using aufs, it only spins up the drive that holds the media I need at the time. Its plugin system is not all that extensive, but the distro itself is put together from the start with small NAS/media servers in mind. The current stable version is 0.5.x, but 0.6 (1.0) is just around the corner and will be based off Debian Wheezy for greater hardware compatibility.
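
    (For anyone curious, the SnapRAID side of that boils down to a small config file plus a periodic sync - a rough sketch from memory, with paths made up:)

        # /etc/snapraid.conf
        parity  /mnt/parity1/snapraid.parity
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        disk d1 /mnt/disk1/
        disk d2 /mnt/disk2/

        # then from cron or by hand:
        snapraid sync    # recompute parity after files change
        snapraid scrub   # periodically verify data against parity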
