Good morning, try making it private - then edit - then repost.
Just ran a quick bench...9260-8i has a lot of headroom...;)
I'm not sure how much I believe in CDM (I really only trust IOmeter and maybe Everest), but for comparison here is the 9211 4x Acard R0 50MB CDM -
http://img412.imageshack.us/img412/8...dr0cdm50mb.jpg
Here is the 1231ML-2G 8x Acard R0 100MB CDM -
http://img341.imageshack.us/img341/9...acardr0cdm.jpg
Haven't benched IOPS yet...I'll add it to my to-do list...;)
Those comparisons are nice, but could you run them with the same 50MB setting, steve-o? You're really showing some great results with those cards! What I meant was: were those results from a software RAID or a hardware RAID?
Lutjens, what settings are you using with the 9260-8i? those are great!
so based on these results what LSI card should I get for just 2 ssd for my bootup raid?
BTW ppl, did you notice that Nizzen is #2 on the PCMark Vantage all-time top 20? And those numbers were done almost a year ago, and it's his 24/7 setup with the OS installed... (I think)
Graphs coming tonight; I have some stuff to take care of first, and I also have to make some graphs for our Norwegian forum. I'll see if I can post a link here so you can check them for comparison if you want; I'll write everything in English anyway.
Good point on doing IOmeter testing with different block sizes, but the reason I wanted you to do fine-grained queue depth scaling is to see exactly how the RAID scales as a function of QD. In theory, the scaling should be perfect up to QD = [flash channels per SSD] * [number of SSDs], but in real life you get about 5-10% overhead per QD on each unit, plus an overhead from RAID administration on the controller. It is the overhead on the RAID controller as a function of RAID size and queue depth that I wanted to investigate with the tests I requested. :)
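To make the idea concrete, here is a minimal back-of-envelope sketch in Python of how I read the scaling description above: perfect scaling up to QD = channels per SSD times number of SSDs, with a flat per-unit efficiency loss standing in for the 5-10% per-QD overhead, plus a separate term for RAID administration on the controller. The channel count and per-channel IOPS figures are illustrative assumptions, not measurements, and if the overhead actually compounds per queue step the curve would fall off faster than this.
Code:
# Back-of-envelope sketch only; channel count and per-channel IOPS are illustrative guesses.
def ideal_iops(qd, n_ssds, channels_per_ssd=10, iops_per_channel=3500):
    # perfect scaling: IOPS grow with QD until every flash channel has a request in flight
    max_parallel = n_ssds * channels_per_ssd
    return min(qd, max_parallel) * iops_per_channel

def estimated_iops(qd, n_ssds, per_unit_overhead=0.07, raid_overhead=0.05):
    # flat ~7% loss per unit plus ~5% for RAID administration on the controller
    eff = ideal_iops(qd, n_ssds) * (1.0 - per_unit_overhead) * (1.0 - raid_overhead)
    return int(eff)

for qd in (1, 4, 8, 16, 32, 64, 128):
    print(qd, estimated_iops(qd, n_ssds=8))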
I'll make some graphs showing the scaling of IOPS and throughput by block size also.
which one is faster?
9260-4i ?
or 9211-8i?
is the 9260-4i worth the extra $100?
anything is better than this sb750 raid!!!
http://i239.photobucket.com/albums/f...Canddefrag.jpg
If you have the extra hundred, sure, by all means; however, the 9211 is a helluva solution. I can hardly guide you on this, though; I think it depends on the number of devices. If it is only two, then you will be far better served with the 9211. I don't know, I have yet to come to any concrete conclusions as I simply have not had the time to finish playing. I can tell you this: there is a negligible difference in game load times between the two. They are both so fast it really doesn't matter, ya see? Unless you are going for an uber setup with tons of devices, the 9211 will be fine. However, be aware of its limitations: RAID 0 and 1 only. Also, if you are using Intels you may need cache because of the write issues with the Intels; Tiltevros can fill you in more on that as he is testing with Intels.
Note: 9260-4i is for 4 drives, not 8.
Turd, I'm glad you're on here. This is good info. My current Summits have 128MB of onboard cache each.
Here's my current situation. I hopped on the SSDs in the summer. Newegg had a deal on the 60GB OCZ Summits for $139 each. I couldn't pass it up, so I naturally bought 2. These use Samsung controllers.
read here. http://www.anandtech.com/storage/sho...px?i=3531&p=22
After I got them I had just migrated to AMD 790FX and ditched Intel, but the ICH10 was a godsend compared to the RAID on the SB750. I've done pages of optimizations in Win7 and drivers from AMD; the performance gain is minimal. I've been using PerfectDisk 10 along with FreeSpace Cleaner and it's been even better. For those of you in this thread who have SSDs and have no idea what the hell I'm talking about, please go here: http://www.ocztechnologyforum.com/fo...ad.php?t=64753
and here: http://www.ocztechnologyforum.com/fo...58&postcount=3
More about the drives I'm currently using:
Series: OCZ Summit
Model: OCZSSD2-1SUM60G
Device Type: Internal solid state disk (SSD)
Architecture: MLC
Expansion / Connectivity
Form Factor: 2.5"
Capacity: 60GB
Interface Type: SATA II
Features: 128MB onboard cache
Performance
Max Shock Resistance: 1500G
Sequential Access - Read: 220MB/s (max)
Sequential Access - Write: 125MB/s (max)
MTBF: 1,500,000 hours
so now everyone knows more about my rig.
my last areca lasted 5 years and was worth every penny. I only used 2 drives with it the whole time. a pair of raptors back then.
Since then I've been looking for my next RAID card to use with SSDs, and SATA III would be nice, so this LSI 92xx is looking perfect. I'll probably toss these Summits eventually and get a pair of SLCs in the future.
I'm doing what the majority of us do with raid. bootable os/ games and apps on the raid. use one or 2 samsung F3 as backup drives.
****All I ever do is 2 drives in RAID 0. I doubt I will ever blow $$$$ on 4 or 8 SSDs. I'm mostly gaming or doing video/music editing. Even spending $330 on a 9260-4i seems like a drop in the bucket compared to a $1200 Areca 1680IX.
If a 9260-4i would be faster than a 9211-4i or -8i, then the extra $100 is worth it to me. I plan on having it for a while.
Basically, I want the fastest LSI 92xx x8 card for 2 SSDs in RAID 0.
is it?
This HBA should be more than an improvement for you. The RAID card would be nice, but with your setup and what you aim to do, me personally, I'd go with the 9211. I am by no means an expert, I am a student of this stuff, so do not take my word for it in its entirety. Let's hear some feedback from the other guys around here; there are great minds on this forum who might feel otherwise and have convincing reasons why. I don't see it personally, but LOL, oh well.
i am doing some serious testing on this here thing-a-ma-bobby and i will have some serious graphs and results by monday. at least hold off until then?
Oh well, I hope Newegg still has them in stock. The word about these cards is going to get out fast. Just think about all the users like me that blew $500 plus on RAID SSD and are using crappy onboard NVIDIA and AMD RAID. Think of what the options were: go to Intel and use ICH10xxx, use onboard AMD/NVIDIA and get crap performance, or blow $600-1200 on an Areca? This is without a doubt the best thing to happen to SSD RAID since the SSD was born.
So am I correct in saying the only difference between the 9211 and the 9260 is that the 9260 comes with 512MB of DDRII cache (800MHz)??
would cache only benefit if you have a lot of drives connected or would it help regardless?
Okay, one thing to think upon, however: I am looking at the 9211-4i for you and it has an x4 link; that gives me pause. Why the x4 link?? The 9260-8i has an x8 link for more throughput. If you are using the 4i card with some future-spec SSDs you might run into caps with that x4 link... why in the hell did they do that?? I admit I am perplexed on that one... give me a few to look into this; all the documentation I have seen says x8 link, so maybe the egg has it wrong, but the -8i definitely has x8.
Thanks man. I definitely want x8 PCIe. I went to LSI and read the papers on the 9211, but it's definitely x4 PCIe. An x4 slot totally defeats the purpose of having a RAID card.
Is this a card you plug into the onboard RAID, say the AMD SB750 in my case, so it takes the load off the internal RAID? Or is this a standalone RAID card where I can hit Ctrl+F after POST, build my RAID set, install Windows, browse for the LSI RAID drivers, install them, and boot from the card? I want to eliminate the onboard RAID once and for all. Is this a RAID card, or am I totally off?
THis is what I just ordered. http://www.newegg.com/Product/Produc...82E16816118107
I think I made the right choice. I'll post my before and after results as soon as it arrives. I'm so stoked! Thanks turd for finding this diamond in the rough. Maybe others knew about these cards, but for most of us looking to maximize our SSD RAID it's been higher-end $600+ Areca cards or nothing. You kinda found the holy grail for SSD RAID and were the first one to take the dive for us. I think you started a revolution with this thread. Give it a month and I'm sure everyone will know the LSI 92xx series. :toast: :toast2:
fyi this is going in my 2nd x16 crossfire slot.
9211 4i:
http://www.lsi.com/DistributionSyste..._PB_100709.pdf
9260 4i
http://www.lsi.com/DistributionSyste..._PB_072209.pdf
The 9211-8i is x8 though, and still a helluva deal at $250!!!! What's confusing to me is why they make the 9211-8i x8 but the 9211-4i x4... but really, you can't go wrong with that 9260-4i, man, it is sweet. I know probably ten guys with that card now, and they all love it to pieces :)
x4 vs x8 really shouldn't matter... I believe it's PCI-E 2.0? x4 should be 2GB/s then. We don't even know if the RAID card can do this bandwidth to begin with, as I have never seen anyone get over 2GB/s yet.
Agreed.
Why would anyone be concerned with PCIe 4x 2.0? Especially when the card can only have 4 drives.
The big question, kind of like what One Hertz was saying, is what happens when we get 8 SSDs (spec'd @ 6Gbps) and load them up on the 9211-8i? That will be quite interesting.
I agree that you have a point there about the x4 link, BUT that is with current-gen drives. Theoretically, with 6Gb/s devices you would be able to saturate that bus. Of course no one has done it yet; the hardware isn't in the wild yet!
I'm guessing 2010 is going to be the year for SSDs. I think all the SSD makers are going to go SATA 3 very soon, so hopefully we can put that x8 to work. I always figure when buying hardware you rarely change, like a PSU, monitor, or RAID card, just get the newest technology, widest upgrade path and best quality you can afford, and it will last for years. You guys know this already. I'm just excited to get it so I can set it up and see what I've been missing out on.
OMG! The admin has been busy and I have access! Tnksalot...
Where to begin ...?
Ah, I know. PCIe2x4 link.
I too found it very odd that LSI advertised (and still advertises!) the SAS9211-4i with an x8 link, but the actual production card uses an x4 interface and link.
Here are some numbers:
PCI Express 2.0 (x4 link): 16,000 Mbit/s, i.e. 2,000 MB/s (~1.95 GiB/s).
(PCI Express 1.0 with an x8 link is the same.)
There is some overhead on the PCIe bus, so the theoretical ~1.95GB/s is actually somewhat less. There's some argument about how much, but 5% is pretty safe, which yields about 1.86GB/s. I think using 1.9GB/s as the maximum throughput before saturation is reached is a safe mark.
I ran the numbers, and for four drives, SSD or HDD, even with a SAS|SATA600 interface, the PCIe2 x4 link won't get saturated. Approximately 200MB/s is the upper limit for the fastest HDD. Using 8 drives, with an expander, at that rate only produces a total of 1,600MB/s (~1.56GiB/s), which is still well under the saturation limit. So a PCIe2 x4 link is good enough for up to at least 8 HDDs (via expander, of course).
HOWEVER, that's not true when the drives are set up for RAID and an expander is used, so that more than 4 SSDs are sending data to the bus.
The fastest SSD cannot get data onto the bus faster than about 300MB/s (~293MB/s in practice) due to the SATA II interface. Four SSDs are not a problem (~1,170MB/s = 1.1GB/s), but 8 SSDs, in a perfect theoretical environment with RAID and a SAS600 expander, could do 2,344MB/s, or about 2.3GB/s, and that is well over the saturation point of ~1.9GB/s for the PCIe2 x4 link.
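If anyone wants to redo that arithmetic with their own drives, here is a quick sanity check in Python using the same round figures as above (~2,000MB/s raw for a PCIe 2.0 x4 link, ~5% protocol overhead, ~293MB/s per SATA II SSD, ~200MB/s per fast HDD). The per-drive rates are the assumptions; swap in whatever your drives actually do.
Code:
PCIE2_X4_RAW_MBPS = 2000            # raw PCIe 2.0 x4 figure used above
PCIE_OVERHEAD = 0.05                # ~5% protocol overhead -> roughly 1900 MB/s usable
usable = PCIE2_X4_RAW_MBPS * (1 - PCIE_OVERHEAD)

for drives, per_drive in [(4, 293), (8, 293), (8, 200)]:
    demand = drives * per_drive
    verdict = "saturates the x4 link" if demand > usable else "fits within the x4 link"
    print(drives, "drives x", per_drive, "MB/s =", demand, "MB/s ->", verdict)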
I'm pretty sure that LSI made the 9211-4i with PCIe2x4 for the sub-market of those who have PCIe x4 slot but no x8 available. I think they also assume that anyone willing to spend more than $5000 for 8 SSD, expander and cable(s) is not going to buy the cheapest adapter just to save less than $100! ...
...anyway that's my input on why there is a 9211-4i with PCIe2x4 link when the others are PCIe2x8 link.
...
Since I always use RAID 10|1E & only run RAID 0 or 5-ish for some interim emergency the 9260 is huge overkill and I got the 9211-8i. My 9211-8i should be here next week ... I hope.
I just went ahead and got the 9211-8i today as well. You really can't beat the price!
Now to decide on whether to buy some Intel drives or wait for some Micron C300's. :rocker:
yeah man integrated raid seems to be the way to go right now...
The 4i is basically for people that want to use no more than 4 SSDs. We won't reach 1.9GB/s with 4 SSDs for at least another 2 years.
I really wish they would fix large-file performance on the 9211. I am doing a Windows 7 install soon and I can't decide whether I want to go with my 9211 or ICH10R. Games load just as fast on both solutions for me, but at least the ICH can do 660MB/s on large files.
They might be a little hard to come by since the target is OEM.
" The Micron RealSSD C300 will be OEM-only, going after the same markets as Intel, Samsung and Toshiba. Crucial, Micron's retail arm, will sell direct to retail.
The Crucial RealSSD C300 will be available in two capacities: 128GB and 256GB with 7% of the NAND capacity used as spare area. ..."
RE: New AnandTech article(short):
CES Preview - Micron's RealSSD C300 - The First 6Gbps SATA SSD
( http://www.anandtech.com/tradeshows/showdoc.aspx?i=3712 )
this just in from LSI via email...
Paul,
New firmware should be posted either this week or early next week.
Best Regards,
Michaela Koert
Technical Support Engineer
Global Support Services
LSI Corporation
3718 N. Rock Rd.
Wichita, KS 67226
Phone: 1-866-625-3993
Fax: 1-316-636-8373
Email: support@lsi.com
ALSO... I have yet to see an increase in performance on these next-gen SSDs (Vertex 2 / SandForce / Micron RealSSD C300) that would justify dumping my current drives for them. I am not going to go 6Gb/s SSD just for the sake of doing it; the device will have to show a tremendous increase in performance. I am banking on JetStream from Indilinx in Q2... nothing else yet has made my jaw drop. And the compression on the Vertex 2 gives me pause; I think I will have to see real-world results on that to be convinced.
Paul, Is that firmware for all the SAS600 adapters, 9211-8i included?
...
For anyone that's interested, Provantage has the 9211-8i back in stock -only 72 of them - for 233+ship (~$238).
http://www.provantage.com/lsi-logic-...4~ALSIG07U.htm
That's the LSI00194 model; retail but no cables included.
it is for the 9211 series as a whole.
Yea. Weird the way they did that too, cuz the 9260 has a different set, and the cables for all the SAS600 kits are the cheap kind, yet the charge for them is more than buying the better cables separately. It's weird ...
I need 8087(host) -to- 8484(target) and|or 8087(host) to SATA fan-out(target) cuz I'm using the XCLIO SS014 and M14TB.
I got cables from Buy. since I was buying the SAS600 drives there but SCSI4ME has the 8087 to SATA fan-out for just $16 + ship.
My 9211-8i shipped ... :).
Wanna talk about irritating cable issues? When I first got my ARC-1680ix it came with SAS-to-SAS cables; being such a noob I didn't know I had to order fanouts, and waited five excruciating days with 8 Vertexes staring at me... then I found out I had ordered the wrong ones; they were "from: target, to: host", so backwards essentially... then I had to wait again. :down:
Hi guys, long time no see :)
I saw these on Newegg. They are different from the ones posted above, but will they do the trick? They cost about $10 less. http://www.newegg.com/Product/Produc...82E16816103196
http://www.newegg.com/Product/Produc...-112-_-Product
This one paired with the less expensive 9211-8i (single pack): $249.99 + $24.99 = $274.98, as opposed to the 9211-8i retail, which goes for $329.99.
You still save $55 doing it this way.
I sent back the 9260-4i in case it was a bad card from the start. I haven't been in this thread for a while since I was traveling for work. What's the latest news? Any updates on that 9211 FW? I'm curious to see some performance comparisons to the old one.
Those carry sideband signals (the extra wire...), which need support from the initiator (adapter). I don't think the 9211-* has it; there's nothing in the literature about it. The 9260-* might support it, but since LSI does not provide cables with sideband in the kit, I kinda doubt it. The sideband signal carries information about the drives -not extremely important to most applications.
Since it is connected to the SFF-8087 connector it could cause incompatibility if it is not supported by the adapter. Then again it could be just a null...
I still would not risk it without definitive info from LSI one way or the other.
http://www.newegg.com/Product/Produc...-112-_-Product is $256 shipped. Only ~$239 at Provantage. Brand new adapters too, since they just got a supply in on Friday ...
I like Newegg -they are "Eggcellent", :), but they are missing the boat on the SFF drives, SATA|SAS600 adapters and accessories. I guess their stock of 3.5" SATA 1|2 stuff is just too big...
Oh sweet. So these cables you posted for $16 are the ones? $239 + $16 = $255 total sounds amazing. Right now I'm on Newegg looking for a new board for this. I am torn because I have 2 x 2GB kits of DDR3 for high voltage and a good Elpida 2 x 1GB kit for benching; these are all spec'd around 1.8V, which was great for 775, but for 1156 and AM3 it's too hot. I've almost had it with AMD at this point, but I have two X2 550s and an X4 965 C3. Then I have these Summit drives here which are probably ready to go on eBay. I could get a cheap open-box AM3 board on Newegg for now, until all the new chipsets come out soon, and keep using the 965 C3 and the Summits; or just sell the Summits, all the AM3 CPUs, and the board, and buy a cheap socket 775 chip to run in a Rampage Extreme I have mounted on the wall as decoration; or I just go back to the crappy RAID performance on the SB750 until something better comes along. The problem is I want to sell this hardware while it's still worth something. Man, this is probably the worst time to be buying hardware cuz all the new tech is days away. What should I do? Anyone have a 775 chip for me for this Rampage Extreme?
EDIT>>>
Newegg got the card and is shipping a replacement 9260-4i. I also just scored a pair of brand new 64GB UltraDrive MEs for $350 bucks, which are identical to the Vertex series. -- The Summits are going on eBay tonight.
I read somewhere that the extra sideband wire shouldn't affect the HBA (9211). The connections are there just in case the hardware supports it. Either way, I can try it today or tomorrow. I have a few cables that have the sideband wire, and I'm getting the 9211 today.
I have not read the SFF-8087 or other SAS cable specs (derelict in my duties, I know ... :rolleyes: ), so I really do not know if the sideband is part of it or not. It could be an amendment ...
I do know that manufacturers and retailers lie a lot, so it could be either way and we would not know about it. :cool: :up::clap:
Well, it will be over $255 ... probably closer to $280-$285 total. It is ~$238 + change to get a 9211-* here, but shipping to you might be a little more or a little less. The cables at SCSI4ME are SFF-8087 (host) to SATA fan-out for $16 + shipping, but you need two of them for the *-8i adapter: $32 + shipping. I don't know what the shipping charge would be; I only remember that I could get the sleeved Adaptec cable for a few bucks more from Buy. since there was free shipping with them. (Either way they cost way too much IMO ...) Since I'll be running the 6Gb/s rate for the SAS600 drives and adapter, I felt like the (supposedly) better cable was important. A SATA300 interface like on the SSDs or SATA2 drives uses 3Gb/s signaling, so any "3Gbps" cable should work without crosstalk or other issues ... but, honestly, I don't know if it matters one way or the other. None of those people state what the cable spec is, so the $16 cable might be as good as the $30 cable. The only thing I know for certain is that the $30 cable is prettier, :), and that since it is sleeved, it won't cause as much turbulence|restriction of the airflow inside the chassis.
They both work for adapters(the "host") with SFF-8087 connectors and drives(targets) with standard SATA connectors that are individually connected or in a cage that has individual connections(such as the XClio SS014).
If the drives are in a cage like the S'Micro M14T, which uses an SFF-8484 connector, it won't work. There is a different cable for those(SFF-8087 -to- SFF-8484).
So will the SCSI4ME 16-buck cables work? BTHOOM, ;). ...depends on the setup.
Got my 9211-8i today.
My old Adaptec cable that has the sideband cabling works fine with the new HBA. I had it attached to an old Raptor drive. So yes, these cables will work.
I have a pair of old highpoint fanout cables that work just fine on all the controllers I have tested.
Decent deal through Amazon as well... Shipping... now that's another question. :)
SAS9211-8I 8PORT Int 6GB Sata+sas Pcie 2.0
Sata 0.6 Meter Breakout Cable
Hey guys, is anyone on here using a 9211-8i or 9260 with a pair of Intel X25-M G2s? I want to recommend this card to a friend who is on X58 with 2 gen1 X25-M 160GB drives on Intel RAID 0. I was telling him to get the 9211-8i over the 9260 since it's about the same performance for $100 less, plus the new 9211 FW is coming out...
this is the email I sent him:
Is this good advice?
Quote:
hey man about the LSI card I told you about.
You want this:
http://www.amazon.com/exec/obidos/tg...KT8F1&v=glance
and this. http://www.amazon.com/exec/obidos/tg...5J8LR&v=glance
or get this and the cable I posted above.
This is what I have:
http://www.amazon.com/SAS9260-4I-Rai...3261978&sr=8-2
It's practically the same thing except it has the 512MB DDRII cache (800MHz).
If I were to do it over again I would get the 9211-8i and save the $100, since you aren't going to flood the controller with tons of drives.
fw is out.
9211-8i: http://lsi.com/storage_home/products...-8i/index.html
At bottom of "support& downloads" section.
Anyone know what are the firmware changes? The text file is instructions -no changelog.
i noticed that it is just instructions and no changelog, which sucks cause i gotta switch controllers to test....oh well i have good data to compare against so we should be able to see..
Is it the one that's dated 12-NOV-99? That's the only one I see.
yea, before they didnt even have a revision out. i will be installing array later today..
It is just the SAS Flash utility & BIOS stuff that is from Oct & Nov. The FW, 2118ir.bin, is dated 23 Dec 2009.
Note that LSI stated this to me in email. I won't be upgrading until they at least tell me|us what the heck the changes are.
How do we flash the .bin file? I can update the BIOS from MegaRAID Storage Manager but I can't flash the FW... help, anyone??
It's not easy, LOL.
Flash the BIOS from within MSM.
You have to use the CLI to do the firmware; it's the only method.
(I had to call support to get it to work.)
Take the contents of the FW file and place them in a generic folder on the desktop.
Then place the executable (sas2flash.exe) from the sas2flash folder that corresponds to your OS in the SAME folder.
Open a CMD prompt elevated to administrator.
Inside the command prompt, navigate to the folder containing the FW and the sas2flash file.
Type in (sans quotes, of course):
"sas2flash -f 2118ir.bin"
Good luck; any questions, hit me on messenger, on Kansas time!
OK, here are some results from the 9211-8i with the new flash...
The first and second images are random and sequential @ 4K with a queue depth of 1.
The 3rd and 4th are 4K with a queue depth of 32.
15.7MB/s more in sequential and 9MB/s more in random... that's quite a good improvement :D
Now, the big big big improvement for me is at 16K... I don't know how they do it, but I have 250-300MB/s more in sequential. My sequential score at 32K is now 1618.90MB/s, up from 1.3-1.4GB/s.
For 64K my score is the same because the drives are the limit there, so no problem with that.
looks good..i havent had time to test yet, got os ready with programs installed...
NICE TILT paul get in the game man! I'm waiting.. I'm flashing the replacement 9260 i'll be back soon.
Paul, beat this: 1702MB/s, 64K @ QD 32
0.0624ms @ 4K, QD 1
Tilt, if you or Paul can give me the steps to configure IOmeter the same way you are, so we're all on the same page, I would appreciate it. I have never used it before this. I'm used to clicking CrystalDiskMark, getting a beer from the fridge, and hitting Alt+PrintScreen, Ctrl+C.
I went and flashed the 9260 and it went so smoothly on the Rampage Extreme. Having been on that damn AM3 crap since summer, I forgot how computers are supposed to work.
I'm gonna call M. Koert at LSI tomorrow and ask him to join XS if he hasn't already. I think we should join forces with them directly to max these cards out and exchange feedback, tips, and board/drive compatibility. In exchange they can flow us some beta BIOSes to play with and some free hardware to test. Everyone wins... LSI gets top-tier benchers squeezing every last drop of performance out of their hardware, and they say thanks and flow us some new toys. At least Paul (computurd); that man deserves some freebies. I'll vouch for you, man.
On another note: do you remember those Got Milk commercials they did a while back, where they show someone with a gigantic chocolate chip cookie and they run out of milk? I feel like that guy. I have my replacement 9260 and a new i3 530, but the board, SSDs and ECO RAM don't arrive till Monday. I'm currently on a 1.6GHz Pentium E2140 (775) on a Rampage Extreme running Win7 64 on a single Raptor. After that brief moment I had the SSDs and LSI actually working last week, compared to this crap I feel like I'm in the special olympics.
@ trans am...well as to instructions for iometer YHPM...
Tilt, your performance with that card continues to excel way past mine... that rocks, man. That latency is hard to match; as a matter of fact, with my old trusty NF200 in the way it will be impossible :) Right now at the same QD with 4K it is 0.0855ms, so you are really kicking my arse there!
paul get back in here man its sat night? lets see that new fw bench
The new firmware has done nothing to fix my issues with scaling on my array. It is very irritating. Not to say that the performance isn't great, but it could be better. I am a very demanding user :(
I'm sorry i haven't been active here for a while, i have now read up on all posts and am up to speed.
I have a theory on why the IOPS performance hits a roof around 70-75K IOPS. It's probably the 800mhz powerPC controller that becomes the bottleneck from further IOPS scaling.
As to the large block sequential performance hit, i have no clue. It SHOULD be possible to fix in firmware, and i can't imagine it would be too hard.
Also, OCZ has added TRIM support to its newest Z-drive through custom drivers and firmware for the RAID controller. LSI should also be able to do this for the 92xx series, if they aren't confident enough to release it to enterprise users, they could at least release a beta version for us (the ones that have the controller anyway) to play with.
I support the suggestion of getting the guy from LSI to join the forum and using this thread as a sort of alpha/beta testing ground with feedback. I think LSI could benefit from using us, and it could speed up the development of good drivers and firmware for enthusiasts, which will be a good market for the 92xx series.
I'm considering getting a 92xx controller myself, but i'm waiting a bit for Areca 18xx and the newer generation SSDs about to enter the market.
It is actually a 533MHz PowerPC. I had come to the same conclusion about the IOPS scaling; however, Tiltevros's experiences and results with the Intels have convinced me otherwise. I think the problem is with the 1.41 firmware I am using; the update is out today, so we shall see. I am putting 1.5 on my drives soon.
You're right, I forgot. I read a few articles on the 9260, which has 800MHz.
One of the things that ought to make the 9211 cheaper is the lower frequency of the controller.
Still, if you look at it from a practical perspective, 70-75K 4KB random IOPS = 280-300MB/s. Anything even an EXTREME user would do in their wettest dreams would not be noticeably bottlenecked by this. When you also take into account that the IOPS limit stays the same for larger blocks, meaning 560-600MB/s for 8KB blocks and 1120-1200MB/s for 16KB blocks, you can disregard this limitation of the controller for all practical scenarios.
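For reference, the conversion used above is just throughput = IOPS x block size; since the IOPS ceiling stays roughly the same across block sizes on this controller, throughput scales with block size until the drives or the link become the limit instead. A tiny Python helper, using 1MB = 1000KB to match the round numbers:
Code:
def iops_to_mbs(iops, block_kb):
    return iops * block_kb / 1000.0   # MB/s, with 1 MB = 1000 KB

for block_kb in (4, 8, 16):
    print(block_kb, "KB:", iops_to_mbs(70000, block_kb), "-", iops_to_mbs(75000, block_kb), "MB/s")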
What is more interesting is to check out the average access time and maximum access time for queue depths 1-128, or, more relevant to practical scenarios, QD 4-64, or even QD 8-32.
From a theoretical standpoint, a 9211 with 8 X25-Ms at QD 32 should be able to deliver access times comparable to 1 X25-M on ICH10R at QD 4. This will translate into what you call "CPU acceleration", causing the CPU to have fewer "wait" cycles while the blocks are fetched, and reducing experienced latency throughout the system.
This should be fairly easy to test preliminarily by benching 1 SSD on ICH10R at queue depths 1, 2, 3, 4, 8, 12, 16 and then benching 4 of the same SSD on the 9211 (4i or 8i) at queue depths 4, 8, 12, 16, 32, 48, 64.
EDIT: Or bench the 1 SSD as a non-member on the 9211! Perhaps a single-drive test on both ICH10R AND the 9211? Or even both single and 4xR0 on both controllers to look at access time and IOPS scaling?
What do you guys think?
If you look at my results with the LSI on the TYAN motherboard, you will see 4K sequential @ QD 1 is like 0.065-0.064ms, and I have 7 Intels on the 9211.
But yeah i will try to do the tests with 7 drives for fun
Tiltevros, I'm talking about random access times here, not sequential. I hope you do the tests you said you will try with a 4KB random access pattern. 4KB sequential at higher queue depths will likely have much lower access times, but you won't see much 4KB sequential in real scenarios.
Yes, Tilt, let's see some results! I have test results from 4K on the 9211 that I have posted elsewhere, with eight Vertexes though... maybe you could give some insight into your idea from this, GullLars?
http://i517.photobucket.com/albums/u...bootsarray.png
http://i517.photobucket.com/albums/u...tssarray-1.png
paul can you go to the 9260 thread quick and see me on gmail?
Computurd, I'm confused whether you mean what gave me the idea, or what I think we can do with the material.
I got the fundamental idea earlier when contemplating Vertex vs X25-M IOPS scaling, when I made the graphs I linked here earlier of scaling with QD; specifically, the graphs of average access time by QD, and IOPS/average access time by QD.
Basically, the more flash channels you have, the higher QD you can have before your access time starts to increase. Once you pass a certain number of channels and/or devices, the controller becomes really important to how much penalty you get by increasing QD, even when QD < #channels.
The purpose of gathering this information is to see if various controllers are a good fit when the goal of the RAID array is to act as a CPU-accelerator (or system-accelerator) and not only to increase storage throughput.
When looking at this, it is also relevant to compare the SSDs being used to HDD arrays and even RAM-SSDs. As far as I can tell, at low queue depths RAM-SSDs are best as they have the lowest access time, but once you pass #channels the latency doubles as you double QD. This means that while an ACARD SSD may own the X25-M at QD=1 and QD=2 when used in dual-port mode, it doesn't have more channels and will double latency with queue depth, while the X25-M has 10 channels and will scale much better with queue depth until it overtakes the ACARD in average latency (and IOPS).
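A rough way to picture this is Little's law: average access time is roughly QD divided by delivered IOPS, so latency stays nearly flat while QD is below the number of usable channels and then grows almost linearly once they saturate. The sketch below is purely illustrative; the channel counts and per-channel IOPS are assumptions picked to show the crossover, not measured values for the ACARD or the X25-M.
Code:
def avg_latency_ms(qd, channels, iops_per_channel):
    # Little's law: latency ~= QD / IOPS; IOPS cap out once QD exceeds the channel count
    iops = min(qd, channels) * iops_per_channel
    return qd / iops * 1000.0

for qd in (1, 2, 4, 8, 16, 32):
    flash = avg_latency_ms(qd, channels=10, iops_per_channel=3500)    # 10-channel flash SSD (assumed)
    ram   = avg_latency_ms(qd, channels=2, iops_per_channel=10000)    # low-latency, low-parallelism RAM-SSD (assumed)
    print("QD=%2d  flash: %.3f ms   RAM-SSD: %.3f ms" % (qd, flash, ram))
With numbers like these, the RAM-SSD wins at QD 1-4 and the 10-channel flash drive overtakes it by QD 8, which is the behaviour described above.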
I suspect the LSI 9211 is good at delivering low-latency IOPS in integrated RAID compared to other cards, since it does not concern itself with caching or other fancy features. By testing the queue depths I suggested above, we can see how much latency penalty (how far from perfect scaling) the controller incurs by adding more channels and increasing the queue depth accordingly. We can also examine how much there is to gain at certain queue depths by increasing the number of units in the RAID.
If you test a single Vertex at queue depths 1, 2, 3, 4, 8, 16, 32 we have the corresponding data from your screenshot above if that is 8x Vertex R0.
@ gulllars ----> http://www.xtremesystems.org/forums/...d.php?t=243725
Well, I don't think there is a need to run those numbers on a single SSD; there isn't an SSD in the world that will get even close to those numbers^^^^
It is simple:
4K, 100 percent random, 100 percent read. Then adjust outstanding I/O; one worker.
we run them in messenger together all the time to test results so i know how he formats it, same as me :)
NEW WINDOWS DRIVERS RELEASED TODAY
The new FW and driver for Win7 have got me up and running; RAID 0 is scaling MUCH better. I hit 1300 in sequential with a quick test; I hadn't gotten over 1050 previously. Will post more later today. WOOT!!
Triple post FTW!!
I see, Tiltevros, thanks for the numbers. At 4KB random you hit the IOPS roof at QD=24, at which point you get 72,928 IOPS @ 0.3001ms average and 0.7904ms max. Those are really impressive numbers!
What also impresses me about this controller is that from QD=1 to QD=10 you only gain 0.0233ms average latency.
It's also really nice that you are able to exceed 1000MB/s @ < 1ms for 16-64KB random.
Now I would just like the numbers for a single X25-M on the 9211 for comparing the scaling of multiple devices (the integrated RAID overhead). From what I can tell by just looking at the numbers, you have great scaling until you hit the IOPS roof. For the larger block sizes it seems the scaling reaches diminishing returns around QD = ([Devices]*[10 (channels)])/([block size]/[4KB]), which is where you start to see channel saturation.
Computurd: The reason for benching a single SSD of the same type as the ones in the RAID is to compare the scaling of a single drive by QD to the scaling of the RAID by QD, and compute [RAID latency]/[single-drive latency] for the same QD per drive. This directly translates into RAID overhead. If you take the number you get, subtract 1, and multiply by 100, you get the % overhead.
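In other words, something like this, where the latency values would come out of IOmeter runs like the ones in this thread (the example numbers are hypothetical):
Code:
def raid_overhead_pct(raid_latency_ms, single_drive_latency_ms):
    # [RAID latency] / [single-drive latency] at the same QD per drive, minus 1, times 100
    return (raid_latency_ms / single_drive_latency_ms - 1.0) * 100.0

# hypothetical: 8-drive R0 at QD=32 vs a single drive of the same type at QD=4
print(round(raid_overhead_pct(0.48, 0.42), 1), "% overhead")   # -> 14.3 % overhead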
EDIT: BTW, the site dump.no has been taken down, so the files uploaded there are no longer accessible. The site was shut down due to lack of funding.
So stevero, if you have hosted the spreadsheets anywhere else, I would love a link ;) (I recently installed W7 on both my laptop and gaming rig, and lost the benching files in the process.)
Gul, I think I still have the files but I do not have a place to post them.
Thanks!
NEW LINUX (PH4) DRIVERS RELEASED too... ;).
Sneaky LSI guys did not change the dates yet. (ED: that's cuz the dates are correct this time: November 2009 => not "new", just new to the public|website!)
9211-8i Link: http://lsi.com/storage_home/products...-8i/index.html
I hope this resolves some issues w/ compatibility.
...they need new|updated firmware for SAS dual-port usage too. The adapter sees 'em but does not know it is the same drive. It lets me try to make a 1TB array out of four 147GB (136GB real) SAS2 HDDs. (... I wish :rolleyes: )
Stevero, how about adding the raw .csv files to a zip archive and uploading it to this site as an attachment to a post? It's next to the smiley face when you post full replies.
Offtopic:
Gul = yellow in Norwegian.
My nick is GullLars (by looking at caps: Gull Lars).
Gull = gold (also used as slang for luck) in Norwegian.
Lars is my first name.
The nick means either Gold Lars (as in gold winner) or Lucky Lars; it's also a joke reference to a TV character named Gul-Lars (Yellow Lars, a yellow human-sized bird on children's TV). Usually it's meant as Lucky Lars, as it's my gaming nick, and people always accuse me of "just being lucky" when I have a streak.
1231ML-2G files attached.
Computurd (or anyone else using the 9211), could you tell me the controller init time at bootup (i.e. how many seconds it adds to the booting process)?
Now, I normally don't reboot at all (24/7 cruncher, after all), but I test new CPUs every so often, which means a LOT of reboots. If this card adds 10-15s I can live with that; however, if it behaves like an Adaptec and adds 45s, it would be a no-go for my main rig.
Thank you :)
Damn... just over my "limit" then, I guess. Is that "Verify DMI Pool Data" hang specific to a certain motherboard, maybe?
This is why I loved the Highpoint 3510/3520.. just under 8 seconds total :)
jcool,
Why the limit?
I reboot once a month or whenever Windows Update needs a reboot.
I've got 2 raid controllers on my main rig, LSI 9260-8i and a PERC 6/i, sure it takes some time but then again I don't boot that often.
Read ;)
Ordinarily I never reboot (like once a month for updates), but if I am testing new CPUs I do 100+ reboots in a single day. Waiting more than 10-15s for a Raid card can get REALLY annoying then.
:)
A simple solution to your problem would be removing the RAID card while you test the new CPUs :)
It is an option, it might not work for you though.
Yeah, it would require the OS drives NOT to be on the Raid card though, which kinda defeats the purpose.
Or, I set up an SSD/HDD specifically for CPU testing. Might be better considering I risk tanking my main OS all the time :wasntme:
Get the Intel X25-V 40GB it is a great boot drive.
I've been buying a few of those lately and it would fit the bill.
Yeah, I think I'm gonna wait for the new SSD gen though.. I'm thinking 4x Micron C300 @ LSI 9211-4i or something to replace my single CSX Indilinx MLC SSD. I'd use that one for CPU testing then :)
Good thinking,
The C300 is first in line for me as well but the SandForce sounds tempting too.
I'd say ~12 seconds here.*
It depends on how the adapter is configured, and a great deal on the system BIOS as well as the system config. My AMI BIOS adds an additional ten-second-plus wait before loading the boot menu after the adapters' BIOSes are finished. I too do not normally (re)boot, but have been a lot recently. That extra 10+ sec drives me crazy ... I am not a patient man ;).
Newer mobos and BIOSes should, I hope, fare better.
*After timing it, I have to say that's a best-case time. A 15 to 30 sec wait is more typical, and then there's another wait of ~10-15 secs while the system BIOS and bootloader spin their wheels. End result on this Supermicro H8DCi (dual Opteron) with only four SAS drives in one LUN on the 9211 is 25-45 secs.
12 seconds sounds fine... hmmm. I'm currently using a DFI mobo (Award BIOS) but will be switching back to my E760 Classified soon (Award as well, but a slow booter...).
I really like the boot/POST speed of the DFIs; I can hardly read anything before getting to the Windows boot logo :)
Configuration would be 4x SSD in Raid 0 ofc, single volume.
If I remember right, my 9211 took close to 30 seconds. It's still better than my adaptec card, but nothing like booting a PC without a raid card.
Off topic, I agree with the Intel 25-v 40G as a boot drive. My pc boots to the Win 7 desktop in 14 seconds from power up. :up:
Yea, well, the LSI 9211-8i part of the boot will take longer if the system is being crashed, gets locked, and requires a hard reset. ;) Then it's probably closer to Spoiler's value ... 20-25 secs rather than 10-15. Both the system BIOS and the adapter(s) BIOS will run longer. I know this because, among other reasons, I have been doing exactly that for the last few days due to a borked Linux OS boot :rolleyes:.
For crashing systems, whether from CPU tests or perhaps OS or other software modifications that will cause crashes, a BBU as with the 9260-* would be advantageous. Otherwise, you should probably use a separate system dedicated to such tests and not keep any valued data on it.
In any event, configure the adapter for the target. E.g., if only LUN 0 is being used, there's no need to scan for all the rest of the stuff the adapter can handle, so configure it appropriately and it will knock off some time.