Page 3 of 3
Results 51 to 71 of 71

Thread: LSI RAID Controller help, 2308 chip on-board ASRock Extreme 11 motherboard is slow!

  1. #51
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Mobile, I flashed the IT firmware without a BIOS and it worked great. Fast boot times now and fast drives!

    Any of you guys mess around with RAMDISKs at all? Wondering if it would be worth playing around with putting games on a RAMDRIVE, or if it would not be that much of an improvement over SSDs.
    GPU: 4-Way SLI GTX Titan's (1202 MHz Core / 3724 MHz Mem) with EK water blocks and back-plates
    CPU: 3960X - 5.2 GHz with Koolance 380i water block
    MB: ASUS Rampage IV Extreme with EK full board water block
    RAM: 16 GB 2400 MHz Team Group with Bitspower water blocks
    DISPLAY: 3x 120Hz Portrait Perfect Motion Clarity 2D Lightboost Surround
    SOUND: Asus Xonar Essence -One- USB DAC/AMP
    PSU: EVGA SuperNOVA NEX1500
    SSD: Raid 0 - Samsung 840 Pro's
    BUILD THREAD: http://hardforum.com/showthread.php?t=1751610

  2. #52
    AikenDrum
    Guest
    Do they still make zero-channel raid controller cards? The ones that direct traffic, but don't actually own the drives/ports. I don't know if that would even be a good idea for this, let alone feasible, but I'm curious.

    The reason why I'm curious is that there is (or was at the time) so little on a ZCR that the profile is incredibly low and narrow and you might very well be able to fit it on one of the little 1x slots between the 16x slots that we want to put four graphics cards in.

    What I don't know/remember about ZCR, since I never actually used one and I'm really quite ignorant about it, is whether you need motherboard support for it, or what sort of performance it can actually give. I don't think it needs to handle the actual data being passed around--it just needs to direct it. If it needs to pass the data through its own piddly little connection, then the 1x speed (and 2.0 as I recall) is not gonna cut it.

    Just thinking out loud. If I'm being a moron, please disregard.
    Last edited by AikenDrum; 08-07-2012 at 07:29 PM.

  3. #53
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    The LSI HBAs have no cache, but they are fully fledged SAS2/SATA3 controllers.
    As we've proven further up the thread, HBAs run best in IT mode, where the drives are just presented to the OS.
    Letting the OS do any RAID, if needed, gives the best performance.
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333Mhz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs

  4. #54
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    well here is my write-up, even though i try not to link my own drivel

    9207-8i
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  5. #55
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    Nice writeup Paul, nothing wrong with blowing one's own trumpet.

    One thing though: is the SAS2308 a dual core, or a faster-clocked SAS2008 (single core)?
    LSI mentions nothing of dual core; some other places do mention dual core.
    I'm leaning towards a faster SAS2008, just single core,
    as the performance increase looks to be the 533 MHz to 800 MHz clock bump and the PCIe Gen2 to Gen3 latency decrease.
    Last edited by mobilenvidia; 08-08-2012 at 09:23 PM.

  6. #56
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    Quote Originally Posted by Computurd View Post
    well here is my write-up, even though i try not to link my own drivel

    9207-8i
    Nice. I would have liked to see an AS-SSD screenshot of the 8x C400 setup to compare against my setup, but no biggie. Really, all of these HBAs need to be run in IT mode, eh, to get the best performance.

  7. #57
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Can't get my 9207-8i retail to work properly with the EVGA 1155 FTW. Is it impossible for EVGA to get SAS controllers to work with their motherboards?

  8. #58
    Registered User
    Join Date
    Apr 2010
    Posts
    6
    Quote Originally Posted by Nizzen View Post
    Can't get my 9207-8i retail to work properly with the EVGA 1155 FTW. Is it impossible for EVGA to get SAS controllers to work with their motherboards?
    My Adaptec 6805e worked flawlessly on Evga Classy X79, so they can do it.

  9. #59
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    How do you do that?
    Assign them as workers/managers?

    please
    thanks

    henrik
    Quote Originally Posted by Computurd View Post

    for even higher speeds...do not RAID them with winders RAID. it is no good for scaling. configure them as separate volumes, then assign either a worker, or manager, to each. Then you will see top speed. no raid!

    they have actually refined the 2308 and FW to where they can pull 700K IOPS now, i reached around 633K and my hardware was the limitation.
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  10. #60
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    560
    I've messed around with the ramdisk a bit. This loads 22 GB at bootup, with my games, user folders and apps, to keep everything smooth.
    I disabled write caching here; thought, why bother. It lowered AS-SSD scores but, meh, it's fast. This is just RAM running at a basic 1212 MHz; quad channel might kick major ass.
    Using Primo Ramdisk Ultimate
    http://i49.tinypic.com/11gm51g.png
    (couldn't find how to shrink the image for here)
    MM Duality eZ modded horizon (microres bracket). AMD 8120 4545Mhz 303x15 HTT 2727 1.512v load. 2121Mhz 1.08v idle. (48hour prime95 8k-32768 28GB ram) 32GB GeIL Cosra @ RAM 1212Mhz 8-8-8. 4870x2 800/900 load 200/200 idle. Intel Nic. Sabertooth 990fx . 4x64GB Crucial M4 raid 0 . 128GB Samsung 840 pro. 128GB OCZ Vertex 450. 6x250GB Seagate 7200.10 raid 0 (7+ years still running strong) esata raid across two 4 bay sans digital. Coolit Boreas Water Chiller. CoolerMaster V1000. 3x140MM back. 1x120MMx38MM back. 2x120MMx38MM Front. 6x120MM front. 2x120MM side. silverstone fan filters. 2x120MMx38MM over ram/PWM/VRM , games steam desura origin. 2x2TB WD passport USB 3.0 ($39 hot deal score) 55inch samsung 1080p tv @ 3 feet. $30 month equal payments no int (post xmas deal 2013)

  11. #61
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    LSI 9207-8i HBA
    7x Plextor M3p 128GB + 1x Intel 520 128GB (waiting for the last M3p :p). Software RAID-0

    Asus x79 WS 2011, server 2008r2



    7x M3p software R0: Copy bench


    Atto 7x m3p + 1x520

    So close

    Edit:
    QD=10 did it.

    And there is no cache whatsoever.
    Last edited by Nizzen; 08-14-2012 at 02:33 PM.

  12. #62
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    I wonder why the seq write speed is so low? Do the M3 Pros not like AS-SSD's incompressible write test?

  13. #63
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by Callsign_Vega View Post
    I wonder why the Seq write speed is so low? Does the M3 Pro's not like AS-SSD's incompressible write test?
    The M3p does like incompressible writes, so I think it is the lack of Areca firmware.

    I use the June IT firmware; have not flashed to the July firmware yet.

  14. #64
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    Fort Rucker, Alabama
    Posts
    626
    I have 64 GB of 2400 MHz G.Skill arriving tomorrow. It should run pretty fast as a quad-channel RAM disk.

    Got my Plextor M3 Pro OS drive working properly. I used the latest 11.5 driver, and it said "this driver isn't guaranteed to work, blah blah", and I forced it to install. Now the numbers are great:



    No one should be using the C600 X79 Intel drivers; those things are horrible.


    Even though the controller is unrelated, I broke a 4K record on my 8x Vertex 4 array:



    Games on that array should load pretty quick onto the RAMDisk.

  15. #65
    Registered User
    Join Date
    Aug 2012
    Posts
    70
    Quote Originally Posted by Callsign_Vega View Post
    Mobile, I flashed the IT firmware without a BIOS and it worked great. Fast boot times now and fast drives!
    Good to read you found the culprit.

    Any of you guys mess around with RAMDISKs at all? Wondering if it would be worth playing around with putting games on a RAMDRIVE, or if it would not be that much of an improvement over SSDs.
    No. Not since the days of MS-DOS (remember? Last millennium).
    With smaller SSD RAIDs the RAM drive would show some perf benefits. If you take multiple LSI controllers in concert, the RAM drive develops a "weakness" vs. the SSDs. Utilizing the RAM drive at higher speeds, the memory bus will be "loaded" twice when transferring data from the RAM drive into main memory. This leaves less bandwidth for your application.
    Another thing with a RAM disk is the higher CPU load vs. a good IO system. Usually the ramdisk copy is done via the CPU and not the DMA subsystem.

    I am running the LSI controllers at 14 GB/s on a stock X79 mobo, still leaving 16 GB/s of memory bandwidth with DDR3-1333 (24 GB/s with DDR3-1600). A RAM disk would need 28 GB/s for similar performance (seen from the app), leaving only 2 GB/s respectively 10 GB/s for the 3930K CPU. Pretty unbalanced system.
    IO transfer at this speed needs less than 5% CPU time on a stock i7-3930K. Haven't checked it, but I assume a RAM disk at this speed would consume 1-2 cores fully.
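    Andy's accounting above can be sketched as a back-of-the-envelope calculation. This is a minimal sketch under my own assumptions (quad-channel DDR3, 8 bytes per transfer, a DMA transfer touching memory twice vs. a ramdisk copy touching it three times); the figures are nominal peaks, not measurements:

```python
# Back-of-the-envelope memory-bus accounting (nominal peaks, not measured).
# Assumption: a DMA transfer touches memory twice (controller writes the
# data in, the app reads it out); a ramdisk copy touches it three times
# (read from ramdisk + write to buffer + app read).

def mem_bandwidth_gb_s(mt_per_s, channels=4, bytes_per_transfer=8):
    """Theoretical peak memory bandwidth in GB/s for DDR3 at mt_per_s MT/s."""
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

def remaining_for_cpu(peak_gb_s, io_gb_s, via_ramdisk=False):
    """Memory bandwidth left for the application after an io_gb_s stream."""
    passes = 3 if via_ramdisk else 2
    return peak_gb_s - passes * io_gb_s

peak = mem_bandwidth_gb_s(1333)  # ~42.7 GB/s quad-channel DDR3-1333
print(f"peak: {peak:.1f} GB/s")
print(f"left after 14 GB/s via HBA DMA: {remaining_for_cpu(peak, 14):.1f} GB/s")
print(f"left after 14 GB/s via ramdisk: {remaining_for_cpu(peak, 14, True):.1f} GB/s")
```

    The leftovers land in the same ballpark as the round numbers above: a ramdisk moving data at HBA speeds leaves almost nothing of the memory bus for the application itself.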

    Quote Originally Posted by Callsign_Vega View Post
    I have 64 GB of 2400 MHz G.Skill arriving tomorrow. It should run pretty fast as a quad-channel RAM disk.
    Rather, make sure that the RAM disk software drives the RAM chips effectively (for real-world performance).
    Easy test:
    Run the stream triad benchmark on your idle system.
    Install the ramdisk software, then start a benchmark with max IO bandwidth (i.e. Iometer) and keep the IO benchmark running.
    Start the stream triad benchmark again (while Iometer is running).
    What are the new numbers for Iometer and stream?
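    The same contention can be illustrated in miniature without STREAM or Iometer. This is my own stand-in, not Andy's actual tooling: it measures a large in-memory copy first on an idle system, then again while a background thread keeps the disk busy:

```python
# Minimal stand-in for the STREAM-while-Iometer-runs test: measure the
# bandwidth of a big in-memory copy, alone and under concurrent disk IO.
import os
import tempfile
import threading
import time

def copy_bandwidth(mb=128, reps=5):
    """Rough GB/s of copying an mb-sized buffer (read pass + write pass)."""
    src = bytearray(mb * 1024 * 1024)
    best = 0.0
    for _ in range(reps):
        t0 = time.perf_counter()
        dst = bytes(src)                       # one full read + write pass
        dt = time.perf_counter() - t0
        best = max(best, 2 * mb / 1024 / dt)   # count both passes, in GB/s
        assert len(dst) == len(src)
    return best

def disk_load(stop, path):
    """Rewrite a scratch file with fsync until told to stop."""
    blob = os.urandom(8 * 1024 * 1024)
    while not stop.is_set():
        with open(path, "wb") as f:
            f.write(blob)
            f.flush()
            os.fsync(f.fileno())

idle = copy_bandwidth()
stop = threading.Event()
fd, path = tempfile.mkstemp()
os.close(fd)
worker = threading.Thread(target=disk_load, args=(stop, path))
worker.start()
loaded = copy_bandwidth()
stop.set()
worker.join()
os.unlink(path)
print(f"idle: {idle:.2f} GB/s, under IO load: {loaded:.2f} GB/s")
```

    The gap between the two numbers is a crude proxy for how much memory bandwidth the IO path steals from the application.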

    No one should be using the C600 X79 Intel drivers those things are horrible.
    They are fine, as long as the DMI 2.0 link to the CPU isn't oversubscribed. It's only 2 GB/s peak (1.6 GB/s practical).
    On your ASRock MB, a fully loaded IO section (8x USB 3.0, 14x USB 2.0, 1x FireWire, 2x eSATA, 2x Gb LAN, 2x 6Gb SATA, 4x 3Gb SATA) would provide/consume approx. 78 Gbit/s, to be delivered over a 20 Gbit/s DMI backbone; it's oversubscribed about 4:1. Seen the other way around: four USB 3.0-connected SSDs would saturate the link, leaving limited bandwidth for the SATA ports.
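    The oversubscription figure is easy to re-derive from nominal interface peaks. The per-port rates below are interface maximums I've assumed (in Gbit/s), not ASRock's published numbers:

```python
# Adding up the peak bandwidth of a fully loaded X79/ASRock IO section
# against the DMI 2.0 uplink. Per-port figures are nominal interface
# peaks (assumptions), in Gbit/s: (port count, rate per port).
ports = {
    "USB 3.0":  (8, 5.0),
    "USB 2.0":  (14, 0.48),
    "FireWire": (1, 0.8),
    "eSATA":    (2, 3.0),
    "GbE LAN":  (2, 1.0),
    "SATA 6Gb": (2, 6.0),
    "SATA 3Gb": (4, 3.0),
}
total = sum(count * rate for count, rate in ports.values())
dmi = 20.0  # DMI 2.0 nominal: 4 lanes x 5 GT/s
print(f"aggregate: {total:.1f} Gbit/s over a {dmi:.0f} Gbit/s DMI link, "
      f"oversubscription ~{total / dmi:.1f}:1")
```

    The sum lands close to the ~78 Gbit/s quoted above, and about four times the DMI link's nominal capacity.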

    Even though the controller is unrelated, broke a 4k score on my 8x Vertex 4 array:
    Games on that array should load pretty quick onto the RAMDisk.
    Nice number.

    BTW, the Vertex 4s need regular TRIM support to maintain their fresh performance. As soon as you leave performance mode and enter storage mode, write perf drops ca. 75%. TRIM can't be done via RAID/HBA adapters these days (except with Intel's RST); it needs to be done on one of the X79 SATA ports. No big issue, just something to consider.


    Have fun with your system,
    Andy

  16. #66
    Registered User
    Join Date
    Jan 2013
    Posts
    8
    Hi guys,

    Sorry to post in a fairly old thread but most of my issues seem to have a lot of info here.

    I recently came back from a holiday in Hong Kong, and having an Extreme11 and being quite under budget I decided to grab some SuperSSpeed S301 64GB drives to test on the LSI controller.

    My problems will be easier to see with some screens:



    So the issue seems to be somewhere with my LSI controller because the Intel MSAHCI 4k-64Thrd numbers are decent.

    Here is an ATTO benchmark that similarly shows the issue:



    I don't know what I could possibly be missing, or whether the difference between the Intel and LSI ports should be that large.

    That's the biggest problem I'm having.

    Quote Originally Posted by Callsign_Vega View Post
    Mobile, I flashed the IT firmware without a BIOS and it worked great. Fast boot times now and fast drives!
    I've also flashed to the IT firmware without flashing the BIOS but my boot times still increased by 20 seconds.

    Quote Originally Posted by mobilenvidia View Post
    If you are going to run the SAS2308 in IT mode, flash the Firmware but don't flash the BIOS.
    This will speed up boot time no end as it won't load the BIOS as there won't be any.
    The drives will still show to the OS (JBOD) as normal.
    You just won't be able to boot into the LSI BIOS, which in IT mode has nothing to see or do anyway.
    To do this you will need to cleanflash the SAS2308 first then load only the Firmware.
    Is there an extra step required to cleanflash?

    The steps I followed were:

    sas2flsh -o -e 6
    sas2flsh -o -f <firmware>

    I have seen instructions that say to do this:

    megarec -writesbr 0 sbrempty.bin
    megarec -cleanflash 0
    <reboot, back to USB stick>
    sas2flsh -o -f <firmware> -b <bios>
    sas2flsh -o -sasadd 500605b0xxxxxxxx (x= numbers for SAS address)
    <reboot>

    But I'm not so sure that I should be rebooting during the flashing process.

    Any advice would be greatly appreciated.

    Thanks


    Miguel

  17. #67
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    Megarec will most likely not even detect the SAS2308 controller; it's for SAS2008/2108 controllers.

    In IT or IR mode you also don't need to do anything with the SBR, as this is only for the IBM M1015 (and other OEM versions of the LSI9240).

    sas2flsh -o -e 6
    sas2flsh -o -f <firmware>

    Should do it, BUT BE CAREFUL!! Don't reboot mid-flash or you may end up with a paperweight.
    I would really avoid this unless absolutely necessary, or you have money to burn.

    You would be much better off just flashing the card with the IT firmware; just do it with MSM in Windows, with sas2flsh -o -f <firmware>, or with MegaCLI.
    You only really need to clean flash when you go from IT to IR (or the other way).

    You never flash the BIOS in IT mode, as the whole point of IT mode is not to have a BIOS.
    Do however flash the UEFI BIOS/driver; this allows quick boots on systems with a UEFI BIOS.
    Last edited by mobilenvidia; 01-22-2013 at 12:05 AM.

  18. #68
    Registered User
    Join Date
    Jan 2013
    Posts
    8
    Quote Originally Posted by mobilenvidia View Post
    Megarec will most likely not even detect the SAS2308 controller; it's for SAS2008/2108 controllers.

    In IT or IR mode you also don't need to do anything with the SBR, as this is only for the IBM M1015 (and other OEM versions of the LSI9240).

    sas2flsh -o -e 6
    sas2flsh -o -f <firmware>

    Should do it, BUT BE CAREFUL!! Don't reboot mid-flash or you may end up with a paperweight.

    You would be much better off just flashing the card with the IT firmware; just do it with MSM in Windows or sas2flsh -o -f <firmware>.
    You only really need to clean flash when you go from IT to IR (or the other way).

    You never flash the BIOS in IT mode, as the whole point of IT mode is not to have a BIOS.
    Do however flash the UEFI BIOS/driver; this allows quick boots on systems with a UEFI BIOS.
    Alright that was what I did.

    Now I just need to figure out why the 4k-64Thrd writes are so slow.

  19. #69
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    Have you played with the drive cache setting in Device Manager, under Disk drives?
    Reboot each time you adjust it.

  20. #70
    Registered User
    Join Date
    Jan 2013
    Posts
    8
    Yep. I get a boost of 250-300 MB/s when I tick "turn off Windows write-cache buffer flushing" on the entire array, so 800-900 MB/s. Still a far cry from what it could be, considering the Intel port had about 300 MB/s 4K-64Thrd writes for a single drive. There is no option for removal policy though, so I'm not sure if that makes any difference?

  21. #71
    Registered User
    Join Date
    Jan 2013
    Posts
    8
    So I just went and did a whole lot of benchmarks for a review on my blog, and ran into something interesting that could explain the low 4K reads on the LSI ports of this board. I'm unsure if it's isolated to mine, but this is something I was able to compare against the Intel ports, with interesting results:



    (Six benchmark screenshots: three LSI vs. Intel comparison pairs.)

    Anybody have any theories as to what could be causing that pretty substantial dip at the beginning of the graph?
