Page 9 of 10 FirstFirst ... 678910 LastLast
Results 201 to 225 of 231

Thread: LSI 9211-8i

  1. #201
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    fhj52, I agree with Anvil that an X25-V seems perfect for your use.
    It costs about $100, and when doing CPU testing and overclocking it will boot fast and deliver sufficient performance in Windows for testing purposes. The only problem may be if you require test suites of more than 25GB; then an X25-M may be necessary.
    While testing you can disconnect your arrays and then just reconnect them afterwards.

  2. #202
    Registered User
    Join Date
    Dec 2009
    Posts
    17
    Quote Originally Posted by jcool View Post
    12 seconds sounds fine.. hmmm. I'm currently using a DFI mobo (Award BIOS) but will be switching back to my E760 Classified soon (Award as well, but a slow booter...)
    I really like the boot-up POST speed of the DFIs, can hardly read anything before getting to the Windows boot logo
    Configuration would be 4x SSD in RAID 0 ofc, single volume.
    In case the edit is missed I'll respond directly.
    To make sure, I timed it (but only with the second hand on a clock). After timing it I have to say the 12 sec is a best time.
    On *my* setup, a 15 to 30 sec wait is more typical, and then there's another BIOS wait of ~10-15 sec while the system BIOS and bootloader spin their wheels. End result on this Supermicro H8DCi (dual Opteron) with only four SAS drives, single-port connected, in one LUN on the 9211 is 25-45 sec.
    ...and I am with you about the wait. It is irritating. This is the worst I have seen from LSI ... But I've been hard-crashing the system and the mobo is a much older gen, so the Nehalem arch may fare better. Your DFI and/or eVGA may be better too ...but based on conversations with LSI support about other things on the 9211, I'd recommend not planning on it.
    ...

    Since we're under a blanket of snow & ice, with more coming, I might take the chance to build the Nehalem ...hopefully everything will improve.

  3. #203
    Registered User
    Join Date
    Dec 2009
    Posts
    17
    Quote Originally Posted by GullLars View Post
    fhj52, I agree with Anvil that an X25-V seems perfect for your use.
    It costs about $100, and when doing CPU testing and overclocking it will boot fast and deliver sufficient performance in Windows for testing purposes. The only problem may be if you require test suites of more than 25GB; then an X25-M may be necessary.
    While testing you can disconnect your arrays and then just reconnect them afterwards.
    Thanks for the input. ...An SSDSA2MP040G2R5 would benefit anyone crashing system(s) more or less on purpose. Data security is much improved without relying only upon snapshots and backups, so that warm, fuzzy feeling is easier to get & maintain.
    I'm not a patient man, but am patient enough to wait for SATA3/SAS2 interfaces and SLC to become the norm later this year, and hopefully at (much) less than 3 bucks/GB too.
    I think the SAS2 drives can handle it in the meantime. Tests indicate the only huge difference, when using RAID of course, is the access time. Patience, again, is the key.
    Not to say that if I catch a Newegg shocker deal I won't impulse-buy older tech anyway. Missed one yesterday that probably would have worked ...

    But neither is going to solve the boot time issue because the problem does not appear to be the storage. Since it appears to be the adapter BIOS & system BIOS, I am going to try to eliminate the only one of those that I can.

  4. #204
    Registered User
    Join Date
    Dec 2009
    Posts
    17
    I wonder how the 9260-4i and -8i do on boot time?
    ...
    ...anybody know?
    Last edited by fhj52; 01-30-2010 at 12:48 AM.

  5. #205
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    fhj52,

    Hard to tell, but ~20 seconds on my rig (20 sec from when the adapter displays its name etc.).

    I've got a PERC 6/i loading right before the 9260-8i, so it's hard to tell when the 9260 starts and when the PERC stops.

  6. #206
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Seemed like the 9211-8i, the 9260-8i, and the Areca 1231ML-2G all booted in about the same time; no appreciable difference that I was aware of.
    All three were solid as far as keeping the array together: no lost arrays. I don't think I lost a single one, and that is saying something because I really stressed these cards.
    I use a separate standalone SSD on ICH when I'm OCing CPU or memory; any RAID card takes too long to initialize when you are restarting all day long.

  7. #207
    Registered User
    Join Date
    Dec 2009
    Posts
    17
    Thanks, Anvil & SteveRo.
    The 320-2x & 2e (either) take 15-20 sec with several arrays and 3x as many disks. Seems like the 3041E-R, the previous-gen HBA with IR, was about the same (15-20 sec). ...
    :: this 9211 adapter is taking too long to initialize.
    ...And then there is that extra wait of 10+ seconds while the system BIOS spins ...

    I'll have to put it into the Nehalem system & see how that BIOS works, but based on what you guys have, I doubt it will fare any better.
    I was asking because this 9211 is supposed to have dual-port compatibility, but LSI is now saying that it is not implemented in firmware (i.e., they lied). Single-adapter-controller multipathing for dual-port redundancy (a sort of "failover") is supposed to be on the 9260 too, which, if not also a lie, might be what I need to get ... depends on the implementation.

    SteveRo, I might 'impulse buy' an SSD if I catch the right deal on a 60+GB drive which will read/write as fast as the SAS2 RAID, or nearly so, because you are of course absolutely correct about using the SSD rather than a spinner when crashing the OS and/or system. It is like a dream come true, :geek:.

    Hitachi & Intel entered into an agreement (PDF doc) a couple of years back to produce an SLC SSD with a 6Gbps SAS interface, and I am waiting to see how bad ($) it is going to be. (Should be available within a couple of months or so.) Probably way too much, since the target customers are data centers ..., but if it is 3-3.5 bucks/GB like the rest of them, no need to spend the same cash on less.
    I feel pretty strongly that SAS600 is the best way to go for SSD and am willing to wait ... it should be a terrific match for the 9211 or 9260, and future gens, capable of producing IOP rates never seen before in PC computing. ...really Xtreme.

  8. #208
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I stumbled over a quite interesting thing while working on an SSD benchmarking project on the Norwegian forum I, Anvil, and Nizzen use.

    The story is, I was claiming that IOPS = (1/average access time (in seconds)) * QD, and was challenged by Anders, so I decided to make proof for my statement. While doing so I stumbled across an interesting and very useful thing. You will find attached a couple of screenshots of an Excel spreadsheet showing my attempt at proving the equation true.
    All numbers are from Anvil's rig, single to 4R0 X25-V from ICH10R.
    The config used is attached as "Random Read 0,5-64kb exp2 QD1-128 v4 zip" (this is the config used for the benchmark project), and a custom version to test the max IOPS is attached as "custom 512b&4kb ran&seq qd8,32,128 8w 4exp zip".

    As you can see, the deviation from the equation is less than 0.3% for 4KB, 16KB, and 64KB. The interesting part was 3R0 and 4R0 at 512B and higher QD: 28% deviation from the formula, and by looking at the IOPS numbers, it seems like an IOPS ceiling has been hit without increasing the average access time. This could be one of two bottlenecks, HBA or CPU.
    The same access specification (512B) was run with 8 workers at QD 8, 32, and 128 (1, 4, 16 each) to try to eliminate the CPU as a bottleneck, and at QD 128 with 8 workers the measured IOPS was almost exactly the "predicted" IOPS from IOPS = (1/average access time) * QD found with 1 worker at the same QD.


    This means we can use this method to analyze setups for IOPS bottlenecks, and find out how much they limit. If you guys want, I can devise a quick IOmeter config for this specific purpose. By running it (/them) you can find the max IOPS of your setup, see if you have a bottleneck, and what the unbottlenecked max would be.
    IMO this would be a nice supplement for analyzing the 9211.


    Thoughts anyone?

    EDIT: BTW, notice that MAX IOPS = 245,256. That is just insane, and for less than $500. (OK, so 512B isn't really useful IRL, but still.)
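    The relation being tested above, IOPS = (1/average access time) * QD, and the bottleneck check it enables can be sketched in a few lines of Python. This is a minimal illustration; the numbers plugged in below are hypothetical, not taken from the attached spreadsheets:

```python
def predicted_iops(avg_latency_ms: float, qd: int) -> float:
    """Predicted IOPS from average access time (in ms) at a given queue depth.

    Same as (1 / access time in seconds) * QD from the post above.
    """
    return (1.0 / (avg_latency_ms / 1000.0)) * qd


def bottleneck_deviation(measured_iops: float, avg_latency_ms: float, qd: int) -> float:
    """Fractional shortfall of measured IOPS versus the prediction.

    Near zero means the drives themselves are the limit; a large positive
    value suggests an HBA/CPU IOPS ceiling has been hit, as described above.
    """
    pred = predicted_iops(avg_latency_ms, qd)
    return (pred - measured_iops) / pred


# Hypothetical example: 4KB reads at QD 32 with 0.25 ms average access time.
print(predicted_iops(0.25, 32))                      # 128000.0
print(bottleneck_deviation(127600, 0.25, 32))        # small -> no bottleneck
```

    With a real bottleneck, measured IOPS would sit well below the prediction (the ~28% deviation seen at 512B with 3R0/4R0), while the average access time barely moves.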
    Attached Thumbnails: 16&64KB IOPS eq (1DivAvgAcc)MulQD proof tables.png (73.8 KB), 512B&4KB IOPS eq (1DivAvgAcc)MulQD proof tables.png (142.3 KB)
    Attached Files
    Last edited by GullLars; 02-06-2010 at 02:34 PM. Reason: Data error in table 512B

  9. #209
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^^ Nice work. Interesting the way that access times reduce/scale as the QD & number of drives in the array increase.

    I guess to find access time you could use this: QD/IOPS*1000 = access time (in ms)

    Are those attachments supposed to be *.icf config files? If so, I can't get them to load in IOmeter.

    Is there any way you could attach the Excel sheet?
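    For the non-bottlenecked case, that rearrangement can be sketched as follows (the numbers are hypothetical, for illustration only):

```python
def access_time_ms(qd: int, iops: float) -> float:
    """Average access time (in ms) implied by measured IOPS at a queue depth.

    Rearranged from IOPS = QD / access_time. Only valid when the IOPS figure
    is not capped by an HBA/CPU bottleneck; at the cap, the implied access
    time overstates the drives' real latency.
    """
    return qd / iops * 1000.0


# Hypothetical: 40,000 IOPS measured at QD 4 implies about 0.1 ms access time.
print(access_time_ms(4, 40000))
```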

  10. #210
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    QD/IOPS*1000 = access time won't work for those instances where you are CPU/HBA bottlenecked; for the rest it will be right within a 1% margin of error. In the instances where you hit the CPU/HBA max IOPS limit, access time will be lower for the setups that have higher theoretical max IOPS.
    After talking with Anvil, I will make a dedicated thread for testing this more in-depth.

    Here is the spreadsheet; the part you want is on page 3.
    Everything is in English (like the rest of the project); I did it that way to share with you guys on XS
    Attached Files Attached Files

  11. #211
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi GullLars, thanks for posting the spreadsheet, but unfortunately it appears to be corrupt. Could you please repost? Could you also please repost the IO config files that Anvil used for the V drives? (Sorry to be a pain.)

    My interest at the moment is performance at queue depths between 1 & 4, but with an emphasis on queue depths between 1 & 2. I might start a thread to look at this in more detail as it is not that well documented.

    For most non-enterprise applications I think this is where performance really matters, and it would be interesting to see which SSD provides the best bang-per-buck ratio within this zone.
    Last edited by Ao1; 02-07-2010 at 01:43 AM.

  12. #212
    Xtreme Member
    Join Date
    Mar 2005
    Location
    UK
    Posts
    402
    Somewhat following on from my discussions in the 9260 thread ..

    Does the 9211 have the same issues with AMD 790FX as the 9260 does? Seeing as it is seemingly my last hope for a 6Gb/s dedicated controller with my AMD build, I figured I should at least ask..

    Also, judging from what I've read here, irrespective of AMD or Intel platform issues, it would seem the 9211 would be more suited to SSD use? (Up to 3 SSDs in RAID 0 in my case.) Is this correct?

  13. #213
    Xtreme Guru
    Join Date
    Jul 2004
    Location
    10009
    Posts
    3,628
    Quote Originally Posted by S1nn3r View Post
    Somewhat following on from my discussions in the 9260 thread ..

    Does the 9211 have the same issues with AMD 790FX as the 9260 does? Seeing as it is seemingly my last hope for a 6Gb/s dedicated controller with my AMD build, I figured I should at least ask..

    Also, judging from what I've read here, irrespective of AMD or Intel platform issues, it would seem the 9211 would be more suited to SSD use? (Up to 3 SSDs in RAID 0 in my case.) Is this correct?
    Take my first-hand woes with the 9260-4i and SB750: it's crap regardless of 9260/9211. You're just asking for trouble.
    Do you see any AM3 crap in my sig anymore?

    EXACTLY! It was either sell the LSI or keep the AMD chipset. Giving up the LSI 92xx SSD RAID was far more of a loss than selling my AMD CPUs and boards.

  14. #214
    Registered User
    Join Date
    Jun 2008
    Posts
    36
    I know this is an OLD post... but

    I am running 4x Vertex on the ICH10R Intel default. My read speeds cap out around 700MB/s. I'm going to guess it's the bandwidth limit of the ICH10R.

    I've done some looking around and had my eye on a PERC 6, but after finding the LSI 9211-8i I think it would fit my needs better?

    Any advice would be great. Don't want to waste my money =)
    CPU----------------------Xeon W3520 @ 4.2
    MOTHERBOARD---------Evga Classified 760
    MEMORY-----------------G.Skill (3 x 2GB) DDR3 2000mhz
    COOLING----------------Thermalright TRUE120
    VIDEO--------------------eVGA 285 GTX (720/1620/2780)
    MONITOR----------------BenQ FP241VW
    STORAGE----------------4x 32gb Super Talent Ultra Drive ME SSDs in Raid 0, 1x WD Black 1TB Storage
    POWER------------------Corsair-750TX

  15. #215
    Registered User
    Join Date
    Dec 2008
    Posts
    3
    Hi,

    Don't recommend it:
    http://www.servethehome.com/lsi-9211...sd-raid-0-bug/

    I also had one before reading that.
    Went with the 9260-8i > best card for this purpose IMO =)

  16. #216
    Registered User
    Join Date
    Jun 2008
    Posts
    36
    Quote Originally Posted by jpiszcz View Post
    Hi,

    Don't recommend it:
    http://www.servethehome.com/lsi-9211...sd-raid-0-bug/

    I also had one before reading that.
    Went with the 9260-8i > best card for this purpose IMO =)
    Doubt I will ever use more than 4x drives in a RAID 0 setup. That bug only affects speed if you use both ports in one array?

  17. #217
    Registered User
    Join Date
    Dec 2008
    Posts
    3
    Hi,

    There is a 9260-4i too.

  18. #218
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    Quote Originally Posted by trans am View Post
    Take my first-hand woes with the 9260-4i and SB750: it's crap regardless of 9260/9211. You're just asking for trouble.
    Do you see any AM3 crap in my sig anymore?

    EXACTLY! It was either sell the LSI or keep the AMD chipset. Giving up the LSI 92xx SSD RAID was far more of a loss than selling my AMD CPUs and boards.
    Seems I'm not the only one who ran into AMD troubles with LSI; the 9260 on my board does not perform very well.

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  19. #219
    Xtreme Guru
    Join Date
    Jul 2004
    Location
    10009
    Posts
    3,628
    You did pretty well actually. You got it installed and RAID going. I would be happy; that's as good as it gets on AMD. My AMD experience was more about not getting the card recognized. You got it going, so be happy. You are still ripping way faster than you should.

  20. #220
    Xtreme Enthusiast
    Join Date
    Feb 2010
    Posts
    701
    Most dynamic RAIDs outperform hardware RAID cards.

    This isn't data that "speaks for itself" as stated in that link. It was simply a ridiculous comparison.

    The 9211 is perfectly fine for what it is. It doesn't have a RoC (RAID-on-Chip); this should be known going in. It can be pushed to perform quite well regardless of stripe size:
    slowpoke:
    mm ascension
    gigabyte x58a-ud7
    980x@4.4ghz (29x152) 1.392 vcore 24/7
    corsair dominator gt 6gb 1824mhz 7-7-7-19
    2xEVGA GTX TITAN
    os: Crucial C300 256GB 3R0 on Intel ICH10R
    storage: samsung 2tb f3
    cooling:
    loop1: mcp350>pa120.4>ek supreme hf
    loop2: mcp355>2xpa120.3>>ek nb/sb
    22x scythe s-flex "F"

  21. #221
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    I agree totally. Put that 9211 in pass-through and it will smoke that soft RAID it is compared to. A dynamic disk on the 9211 will torch anything. The dude configured it wrong: a dynamic disk across two RAID 0 arrays. He should have just set up a dynamic disk across all drives if he wanted to see something.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  22. #222
    Registered User
    Join Date
    Mar 2010
    Posts
    63
    Does the 9211 support SSD TRIM in HBA IT mode?

  23. #223
    Registered User
    Join Date
    Jan 2007
    Posts
    42
    LSI SAS 9211-8i P10 Firmware and BIOS Upgrade

    P10 BIOS Release Notes:

    Release Version 7.19.00.00 - SAS2BIOS_PH10 (SCGCQ00195755)

    (SCGCQ00195755) - GCA Release Version 7.19.00.00 - SAS2BIOS_PH10
    Change Summary ( Defects=15 Enhancements=6)
    SCGCQ00183922 (DFCT) - View Volume screen needs more than one ESC key press to exit screen if user has entered Manage HS screen at least once.
    SCGCQ00184126 (DFCT) - CU selection to choose a tape to put into $DR (Disaster Recovery) does not set $DR mode.
    SCGCQ00186462 (DFCT) - SAS2BIOS: Adapter Properties screen shows Package Version Field
    SCGCQ00187617 (DFCT) - BIOS will attempt to scan for devices if adapter is not in LSI mode
    SCGCQ00187715 (DFCT) - Verify command in BIOS CU continues for a drive that is pulled out.
    SCGCQ00188722 (DFCT) - BIOS: Config page error is seen when changing enclosures in Controller BIOS.
    SCGCQ00189197 (DFCT) - Server appears to hang during "Initializing..."!
    SCGCQ00189775 (DFCT) - SAS2BIOS/CU is not checking valid bit in fixed format sense data
    SCGCQ00193110 (DFCT) - BIOS hangs or asserts NMI and reboots while parsing the SMBIOS entries of some servers
    SCGCQ00193121 (DFCT) - Tape Device is displayed as a missing row in Create volume screen of Bios CU.
    SCGCQ00193310 (DFCT) - BIOS CU "Format" and "Verify" fails with some HDDs
    SCGCQ00174000 (CSET) - SHEX fix not supported for all B0 Mustang/TBolt parts
    SCGCQ00188149 (CSET) - Windows 2008 fails to install to disk attached hybrid EFI/legacy system
    SCGCQ00188702 (CSET) - SAS2 HBA is not releasing allocated memory when done with it.
    SCGCQ00192605 (CSET) - Missing SATA drive of a SATA Volume is displayed as 'Missing drive' under SAS volume and vice versa.
    SCGCQ00174966 (ENHREQ) - Provide Manufacturing page 240 information to HBA FW through config page write
    SCGCQ00175682 (CSET) - SAS2BIOS - Add Phy Number to Device Properties screen of SAS2BIOS Configuration Utility
    SCGCQ00176889 (ENHREQ) - SAS2BIOS: Add full descriptor support for Request Sense data (>2 TB support)
    SCGCQ00187157 (ENHREQ) - Additional adapter support for IO Space Write Errata (TBolt/Mustang B0)
    SCGCQ00187419 (ENHREQ) - SAS2BIOS - Supply additional PCI information for IOC Page 1
    SCGCQ00183688 (CSET) - Remove Warhawk references from baseline BIOS



    P10 Firmware Release Notes:

    Phase10 Point Release Version 10.00.02.00 - SAS2FW_Phase10 (SCGCQ00202418)
    Change Summary ( Defects=1)
    SCGCQ00200602 (DFCT) - Fault 0x100 prevents code from running at SOD

    Phase10 GCA Release Version 10.00.00.00 - SAS2FW_Phase10 (SCGCQ00198003)
    Defects=0, Enhancements=0 (Version Change Only)

    Phase10 Beta Release Version 09.250.03.00 - SAS2FW_Phase10 (SCGCQ00195554)
    Change Summary ( Defects=6)
    SCGCQ00189445 (DFCT) - Gen2 FW Ph10 - Roaming of > 2TB SATA volume drives from SAS1 to SAS2 cards, shows the volume drives as bare drives.
    SCGCQ00192386 (DFCT) - OCE is not happening as expected, after firmware upgrade from ph2.5 to ph10
    SCGCQ00192564 (DFCT) - Value FF in IO Unit Page 7 when controller PCIe link speed is 8.0 Gb/s
    SCGCQ00194668 (DFCT) - Activating a foreign volume results in 0x830A fault if a native FAILED volume with no drive exists
    SCGCQ00193578 (CSET) - IR: Resync operation for RAID10 volume stalls when 13% complete
    SCGCQ00194924 (CSET) - Good Status reported on write command (mid execution) terminated with Task Set Full during data transfer


    Phase10 Beta Release Version 09.250.02.00 - SAS2FW_Phase10 (SCGCQ00191927)
    Change Summary ( Defects=33 Enhancements=3)
    SCGCQ00176350 (DFCT) - DATA_UNDERRUN (0x0045) IOC status is replied during IO testing
    SCGCQ00183364 (DFCT) - 5861 fault after link related parity errors
    SCGCQ00183867 (DFCT) - MPI2: IO Unit Page 10 structure incorrect
    SCGCQ00183873 (DFCT) - IOP: Fault 0xd04 when accessing config space at offset 0x19c
    SCGCQ00184430 (DFCT) - PL: Bad status frame sent (target mode)
    SCGCQ00184913 (DFCT) - sas2308eval.xsd file has incorrect default for ChipName
    SCGCQ00184927 (DFCT) - Partial NVData XML does not contain NVDataProductId
    SCGCQ00185016 (DFCT) - Fault 5900 during Task Management with SATA drives.
    SCGCQ00185657 (DFCT) - 8111h fault occurs while activating a foreign volume on an adapter which already has failed volume with all the physical disks missing
    SCGCQ00185661 (DFCT) - (NVDATA) Linux does not see all 16 VF's in SRIOV mode
    SCGCQ00185967 (DFCT) - (SRIOV-Only) SAS IO Unit Control Visibility Doesn't Fail 2nd Time
    SCGCQ00185996 (DFCT) - 620F fault on SAS read data overrun
    SCGCQ00185998 (DFCT) - SATA: NCQ read command is not completed after a data underrun or overrun
    SCGCQ00186348 (DFCT) - Update to handle large out of range max host credit setting in manufacturing page 9.
    SCGCQ00186684 (DFCT) - 9211-8i HBA connected to expander may not auto-negotiate as wide port
    SCGCQ00186699 (DFCT) - Need to use the same frame for the event and reply
    SCGCQ00187116 (DFCT) - 265D fault when posting a trace buffer after printing the ring buffer
    SCGCQ00187262 (DFCT) - Duplicate data appears at start of diagnostic trace buffer
    SCGCQ00188081 (DFCT) - 265D fault on ATAPI IO
    SCGCQ00188519 (DFCT) - SATL: Informational Exceptions log page read fails
    SCGCQ00188546 (DFCT) - Fault 7C35 seen during READ DMA passthrough commands to SATA Drive
    SCGCQ00189474 (DFCT) - SAS2 NVDATA Readme missing some information regarding HBA's 9205-8e & 9202-16e
    SCGCQ00189481 (DFCT) - SATA reads with pad bytes fail with data overrun
    SCGCQ00189509 (DFCT) - SATA: Command timeouts after read data overrun
    SCGCQ00189782 (DFCT) - SATA: Read command completes with good status after data overrun
    SCGCQ00190206 (DFCT) - ChipName in Spitfire (2004) NVDATA incorrect
    SCGCQ00190529 (DFCT) - IR Config Page accesses via CLI do not work
    SCGCQ00190839 (DFCT) - IOP: fix minor bugs for CLI to display VF statistic info
    SCGCQ00184784 (CSET) - Reception of Initiator Added event before host removed device from a previous Initiator Not Responding event
    SCGCQ00184886 (CSET) - MPI2_IOCFACTS_PROTOCOL_SCSI_TARGET is set in Protocol Flags of IOC Facts Reply
    SCGCQ00188251 (CSET) - TM-Target Reset with Srst for sata drives does not have TM_NOT_SUPPORTED set
    SCGCQ00189530 (CSET) - Sense data for the >2TB SAS drive remains in fixed format only & no change is seen to descriptor format
    SCGCQ00189777 (CSET) - 0501 fault occurs after sending "iop show cfg all" command via uart
    SCGCQ00071074 (ENHREQ) - SPV - Add Support for "REQUESTED INSIDE ZPSDS CHANGED BY EXPANDER" bit
    SCGCQ00184066 (ENHREQ) - prevent 2208/2308 FW from being installed on 200X/2108/2116 and vice versa
    SCGCQ00186584 (ENHREQ) - IR: >2TB Support: D_SENSE needs to be changed for > 2TB SAS Bare drives only
    ..............
    ..............


    http://www.lsi.com/products/storagec...AS9211-8i.aspx

  24. #224
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    OK, what is the Warhawk? They always reference it in different firmwares on a number of cards... I think it might be their 6Gb/s switch but not sure. Thx for the link btw idolclub

  25. #225
    Registered User
    Join Date
    Mar 2010
    Posts
    63
    Quote Originally Posted by Computurd View Post
    OK, what is the Warhawk? They always reference it in different firmwares on a number of cards... I think it might be their 6Gb/s switch but not sure. Thx for the link btw idolclub
    Hi Computurd,

    Would you test LSI 9211-8i again with the latest P10 firmware and Crucial C300/C400 SSD?
