I spaced for a sec. I read the post and saw 940, but I asked because I'm looking for a 939 server board with PCI-E and PCI-X, and I didn't see a Supermicro or Tyan board that had both.
s939 server board? :wth:
Quote:
Originally Posted by Hassan
Well, I have a few extra Opteron 170 939s and I was going to get the Tyan 2865G2NR http://www.tyan.com/products/html/tomcatk8e.html for a customer. It was between that and the Supermicro H8SSL-R10 http://www.supermicro.com/Aplus/moth.../H8SSL-R10.cfm. One has PCI-E, the other has PCI-X.
Quote:
Originally Posted by LexDiamonds
I think it's far from the truth that that plainly helps. It's very dependent on a lot of factors and not a good "rule of thumb".
Obviously, there is no "optimal stripe size" for everyone; it depends on your performance needs, the types of applications you run, and in fact, even the characteristics of your drives to some extent. (That's why controller manufacturers reserve it as a user-definable value!) There are many "rules of thumb" that are thrown around to tell people how they should choose stripe size, but unfortunately they are all, at best, oversimplified. For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance.
http://www.storagereview.com/guide20...erfStripe.html
I find storagereview's guides and educational material very helpful.
Finding a good stripe/cluster size for your setup is the hardest part. I tried pretty much every combination and they all tested the same, give or take a little, so I just gave up and use a 64k stripe and 4k cluster, which I think StorageReview recommended for everyday usage in a desktop.
I think stripe/cluster choices come into play more in a server than in our desktops.
Quote:
Originally Posted by epion2985
Yeah, that would be great, but buying that mobo means shelling out 500 bucks plus the 940 CPU plus ECC RAM. With AM2 and Conroe coming out, I'm not sure it's the smartest idea investing big money in a system like that.
Quote:
Originally Posted by s7e9h3n
here are some pics of 16/16,32/32,64/64...in that order:
http://s2.supload.com/thumbs/default/16_16.bmp
http://s2.supload.com/thumbs/default/32_32.bmp
http://s2.supload.com/thumbs/default/64_64.bmp
Umm, AM2 or Conroe? ECC DDR2 is not that prevalent yet, and AM2 and Conroe aren't multiprocessor, maybe multi-core, but you can't even compare those platforms with what s7e9h3n recommended. What's more, I'm not going to tell a paying client to wait 3 months for something else. But hey, that's me.
Quote:
Originally Posted by SamHughe
Anyway, my Areca 1210 should be arriving today with battery backup. I'll post before and after with four 36GB Raptors in RAID0, nVidia vs. Areca.
I get this with a DFI CFX3200 and 2x Raptor 150GB in RAID0, 16k/4k, controlled by the ULi M1575:
http://koti.mbnet.fi/mtrepo/HDTach/H...ULIECached.jpg
That was with an X2 4400+ @ 10 x 280MHz. The HDD performance seems to increase consistently with decreasing CPU/HT speed. With a 10x180 underclock I get this:
http://koti.mbnet.fi/mtrepo/HDTach/H...underclock.jpg
PCMark05 tells the same story. 7.4k @ 10X310, 8.1k @ 10X220.
:confused: What is going on here? Help appreciated.
Well, Hassan, if you can beat this HD Tach I'll be surprised. I took this when I installed my 1230 with only 256MB of cache (the 1GB is on the way). This is with 4x500GB Seagates in RAID5. Now I can finally get rid of the 1210. The volume name is the same because I originally made the array on my 1210.
I'll see, just waiting for the UPS man right now!!
Quote:
Originally Posted by Hassan
Looking forward to that BIG time!....:woot: :clap: :banana:
Quote:
Originally Posted by Matuzki
That's definitely strange... I would try and contact: me2@georgebreese.com
George helped me out years ago when the KT400 chipset came out... he designed a few RAID patches that fixed a lot of glitches...
http://www.georgebreese.com/net/software/
This is where I used to go to download tests and whatnot, but he is the man when it comes to RAID! :woot:
I second that. I'm also waiting for my HighPoint 2310 (PCI-E x4) SATA controller card to arrive. I'll post my results with 4x 150GB Raptors on this card (if everything works OK).
Quote:
Originally Posted by Grinch
What stripe sizes and cluster sizes are you guys using with your Windows XP arrays? I've been digging up some information because I bought another WD2500KS to set up in a RAID0 array.
This is what I've found so far:
http://discuss.futuremark.com/forum/...&fpart=13&vc=1
Quote:
I would like to update this RAID0 guide with my experiences. I have used RAID for a few years now and have had no problems at all.
Upon installing WinXP you will find the average file size to be 373kb. The general rule is correct: you divide the average file size by two and go to the next lowest setting:
373 / 2 = 186.5, next lowest is 128k
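A quick Python sketch of that divide-by-two rule as stated above (an illustration added here, not part of the original guide; the list of available stripe sizes is just an example of what a typical controller offers):

```python
# Sketch of the "halve the average file size, then drop to the next lowest
# stripe size" rule of thumb described above. The available sizes are only
# an example of what a controller might offer, not a fixed standard.

def suggest_stripe_kb(avg_file_kb, available=(4, 8, 16, 32, 64, 128, 256)):
    """Return the largest available stripe size no bigger than half the average file size."""
    target = avg_file_kb / 2
    fits = [size for size in available if size <= target]
    return max(fits) if fits else min(available)

print(suggest_stripe_kb(373))  # 373 / 2 = 186.5 -> next lowest setting is 128 (KB)
```

Whether the rule itself is sound is debatable (see the StorageReview quote earlier in the thread), but that is the arithmetic the guide is using.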
I tried experimenting with stripe sizes and found the following:
16k: seek errors quadrupled, slow file access, slow loading times, and HD Tach benchmarks showing inadequate HD performance, circa 37MB/sec sequential read (the speed of one of my single drives). I rechecked the settings and I had indeed made a RAID0 partition, but I figured the problem lay with the stripe size and accordingly the RAID structure.
32k: seek errors halved from the previous setting, the OS was faster, loading times reduced, benchmarks improved, etc., but still not as fast as my previous default of a 64k stripe.
I had already tried the 64k stripe and had been having what I thought were good results, with HD Tach showing 60MB/sec sequential read, few seek errors, good loading times, etc., so I tried 128k.
128k: OMGWTF, load times blisteringly fast, HD Tach benchmark at 80MB/sec sequential read, burst at the theoretical max of 92MB/sec (ATA100), game load times cut in half, hardly any seek errors... like I said, OMGWTF.
So naturally the urge to fiddle further had caught me... time for 256k.
256k: load times same as 128k, HD Tach showed 78MB/sec sequential read, but... the HDD drop-off occurred a lot earlier, which is slower than 128k, and here is the kicker: burst speed 85MB/sec and the seek errors rose quite a bit, so I did the only thing that was left to do and went back to 128k.
File size is critical with RAID 0: a standard, brand-new WinXP installation has an average file size of 373kb, so the RAID stripe size should be 128k for WinXP. This should be an industry standard, but as usual, things like RAID take a few years to catch up.
RAID tests should also not be done on a full drive; that can lead to spurious and false results. If you ran a test on a full drive, got a result of 2MB (mentioned previously), and then set up a RAID 0 with 2MB stripes, you would lose a whole wad of your hard drive, as each stripe would hold one half of one file only. That is why you should only use a standard, new installation to find the OS stripe size: any files added later will conform to the stripe size set, and having the wrong size will cause HD space wastage/overtaxing, which is not really good, and you will not get the performance you should.
So my advice is: if you are using WinXP, make your stripe size 128k and leave it at that until the next new OS arrives.
All these tests were done on a Silicon Image standalone RAID card with a Sil 0680 chipset, with the newest BIOS and drivers.
What cluster size would you use with a 128k stripe?
In the first 2 pages of this thread, cluster/stripe sizes are mentioned in detail.
Quote:
Originally Posted by burningrave101
I already know what stripe size I want to try. I'm just wondering what cluster size to use with 128k. The first two pages just suggested using the same cluster size as the stripe size. I don't think that will work as well for 128k.
Quote:
Originally Posted by Haltech
http://www.storagereview.com/guide20...erfStripe.html
Quote:
For example, some say to match the stripe size to the cluster size of FAT file system logical volumes. The theory is that by doing this you can fit an entire cluster in one stripe. Nice theory, but there's no practical way to ensure that each stripe contains exactly one cluster. Even if you could, this optimization only makes sense if you value positioning performance over transfer performance; many people do striping specifically for transfer performance.
So what cluster size would be best for a larger stripe size like 128k?
EDIT: Here is some more information on why I think a 128k stripe will offer better real-world performance.
http://www.overclockers.com.au/artic...?id=179581&P=2
Quote:
Intel offers to arrange the RAID 0 array with 4, 8, 16, 32, 64 and 128K stripe sizes. They recommend 128K for the best application performance. Intel also explains that a smaller stripe size than 128K gives a better transfer rate. We found out that this is exactly true. As shown in the attached screenshot, RAID 0 based on the ICH5-R shows the best serial transfer rates in HD Tach and Sandra with a stripe size of 4K, exactly as predicted by Intel. However, and much more important: as shown by Winbench 99, the best performance with applications is achieved with a stripe size of 128K.
It is widely understood that the disk transfer rate indicates HDD performance, and many users indeed take it as THE sole benchmark for HDD performance. Well, it seems that this can be a misleading approach. As shown by our Winbench 99 test results, the best performance with applications is achieved with a stripe size of 128K - but that returns a significantly lower transfer rate than a 4K stripe size. This shows that the transfer rate can be a misleading indicator of disk performance and should not be taken as the sole HDD benchmark. Applications access the file system on the disk in a different manner to how serial transfer rates are tested. In the latter case the disk head is reading and writing either a fixed file size or a small number of file sizes to and from given spots, in a linear fashion from the beginning of the platter to its end. That is not how applications access the file system on the HDD. In that case files of continually changing sizes are read, written and moved in almost random patterns.
http://forums.cluboverclocker.com/ar...hp?t-2020.html
Quote:
The stripe size is the amount of data that each hard disk has to read or write at any one time.
So in a RAID 0 with a 16K stripe, the first drive will be sent 16K and immediately thereafter the next drive will get the other 16K.
When working with small files, it is often best to use a smaller stripe size because:
1. It saves space (because if you have a 4k file, it fits within 16K and the other 12K is wasted).
2. You can grab a lot more small files faster if you don't have to read a large stripe size (like 128K, which is 8 times bigger).
3. A smaller stripe size can give better sequential transfer rates for a single task, i.e. it shows up well in benchmarks like HD Tach.
When working with large files:
1. A larger stripe size means less time is wasted going in between each drive requesting the next cluster of data. So if you are editing a multi-gigabyte file, it's easier on the hard drives to have a larger stripe size.
2. A larger stripe size usually gives better system performance in multitasking situations (benchmarks that open many programs at once show improved performance)
Recently I have been using 128K and my multitasking ability is better than when I was on 16k. For example, I can upload at 2MB total per second, divided between 200 different sources on eMule, while I also watch a 6MB-per-second MPEG2 TV recording that I made using my TV card. That kind of activity is really tough on a single hard drive and is probably only possible because of RAID. Imagine having 200 different people requesting a random portion of a random file while you also use the same hard drive to run the OS and watch high-quality TV.
With the latest corruption of my RAID, I think I am going to go back to 64 or 32k and try it out, because the transfer rates were slightly higher when I did DVD editing, i.e. with 128K I would get 500fps processing a DVD (writing it back to the hard drive after editing it), and with 16k I remember getting somewhere around 600-700fps, although my overall system performance was slightly slower.
To sum it up, it depends what you plan to be doing with your computer. It also depends on the type of hard drive you have; some hard drives work better with different cluster sizes. RAID stripe sizes can be anywhere from 8k to 4MB depending on your RAID controller's capability. My own controller offers 16k, 32k, 64k, and 128k.
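To make the "each drive gets one stripe in turn" description above concrete, here is a minimal Python sketch (an illustration added here, not from the quoted post) that maps a logical byte offset to a drive and stripe in RAID 0; the two-drive count and the stripe sizes used are arbitrary example values:

```python
# Minimal illustration of RAID 0 striping as described above: logical data is
# chopped into stripe-sized chunks that rotate across the member drives.
# Two drives and the stripe sizes below are just example values.

def raid0_locate(offset_bytes, stripe_kb, drives=2):
    """Return (drive index, stripe number on that drive, offset within the stripe)."""
    stripe_bytes = stripe_kb * 1024
    stripe_index = offset_bytes // stripe_bytes      # which stripe overall
    drive = stripe_index % drives                    # stripes alternate across drives
    stripe_on_drive = stripe_index // drives         # where that stripe sits on its drive
    within = offset_bytes % stripe_bytes
    return drive, stripe_on_drive, within

# A 100KB read with a 16k stripe touches both drives several times,
# while the same read with a 128k stripe stays inside a single stripe.
for kb in (0, 16, 32, 100):
    print("16k stripe, offset %3dKB ->" % kb, raid0_locate(kb * 1024, 16))
print("128k stripe, offset 100KB ->", raid0_locate(100 * 1024, 128))
```

This only shows the address mapping; it says nothing about which stripe size actually benchmarks faster, which is exactly what the thread is arguing about.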
The only game I play is Elder Scrolls IV: Oblivion, and with those large .bsa files I would think a larger stripe size would offer the best performance.
A larger stripe size will do better when dealing with mostly large files... if it were me, I would match the stripe and cluster... 128/128.
I would HIGHLY recommend that you read this entire thread... this was all benched and tested a long time ago:
http://forums.pcper.com/showthread.php?t=267729
also read this:
http://faq.storagereview.com/tiki-in...age=StripeSize
So far, my burst speeds have doubled but my sequential read speeds are pretty much the same. Ironic, as everyone pointed out that my speeds were bus-limited and a hardware controller would free them up. I went as far as to mod my Ultra-D to SLI to enable x8 on the second PCI-E slot, as opposed to x2 in Ultra mode, to eliminate bus saturation issues. I'm going to try different stripe sizes and maybe even RAID 5. If I can switch to RAID 5 without significant changes in speed, maybe I will go that route. My only assumption is that first-gen Raptors are no match for current SATA II drives with high densities, and they are no match for current-gen Raptors. I have tried enabling and disabling TCQ and setting write-back vs. write-through, with only minor differences. I will try different stripe/block sizes, as I was at 16/4 before and after. Regardless, burst has doubled, CPU utilization is still near zero on both, RA is the same, with a slight improvement with the Areca on the same bench. I will also test write speeds and test with other benchmarks and a raw copy. :fact:
For real-world performance I would suggest a bench like Winbench99 instead of HD Tach. Huge scores in HD Tach don't exactly mean the best performance in real-world applications.
QFT.... IOMeter is another good program to test true HDD performance....
Quote:
Originally Posted by burningrave101
Thanks, Grinch, for trying to help me. It seems this George fellow has been busy with something else lately, so I probably won't bother him.
I've been playing around with my problem some more now. It is not directly related to RAID; I get the same behavior with a single drive too, and with the Sil3114 controller. It is not about CPU speed as such either... decreasing the multiplier has no effect. So HTT gives the main contribution (not surprising), though the memory multiplier seems to have some effect too.
So I boot into Windows at high speed, decrease HTT with ClockGen, and my HDD speed gets a big increase. Got 9k in PCMark05. That was enough to get ninth place in the ORB. And with a 4400+ @ 1.7GHz. :D
That tells you a thing or two about this benchmark...
Try running ATTO.
Using an older version of ATTO that only did a 32MB total length, this is what I got with my 1210 and 4x500GB Seagate 7200.9s:
Now the same 32MB total length with the 1230 and 256MB of cache. I will post an updated ATTO run when I get the 1GB of RAM to use as the cache.