Here are two laptop drives I'm testing.
Here are my two WD Caviar RE 250GB drives in RAID 0 on the NVIDIA controller of the 680i board.
http://i3.tinypic.com/6ksg4kp.jpg
http://i3.tinypic.com/7wt0ack.jpg
http://img252.imageshack.us/img252/299/hddtuneks9.png
2x WD 36.7GB Raptors in RAID 0 on a crappy Silicon Image 3112 controller with a 16K stripe
LANPARTY UT RDX200 CF-DR
Nope, it's an ATI chipset; it's running on the Silicon Image 3112 controller.
My 2x 74GB Raptors:
http://www.brittech.co.uk/hdtune101107.gif
http://www.brittech.co.uk/hdtach101107.gif
Those Raptors are nice...
I have some old 36GB Raptors... I should throw those in here and see how they perform after 4 years...
Attachment 68117
Western Digital Scorpio WD2500BEVS 250GB
2x125GB platters
8MB Cache
5400RPM
2.5"
http://www.storagereview.com/WD2500BEVS.sr
http://www.hothardware.com/Articles/...inch_SATA_HDD/
Definitely not bad for a notebook hard drive. It's fast because of its size (platter density) despite being only 5400 RPM. Access time could definitely use some help, though.
Seagate 7200.11 500GB http://www.seagate.com/ww/v/index.js...D&locale=en-US
The PC is folding; ignore the 100% CPU reading.
http://img.photobucket.com/albums/v4...udson/1-23.jpg http://img.photobucket.com/albums/v4...udson/2-11.jpg
Wow, now that is impressive. They've squeezed quite a bit more performance out of those drives since the .10 series. My 320GB 7200.10 only did 60-65MB/s average.
4x 36GB 16MB-cache Raptors in RAID 0 with either a 16 or 32KB stripe (can't remember which I settled on).
It's a camera shot because that's the new computer I built and it isn't connected to the internet yet.
http://i214.photobucket.com/albums/c...e/DSC02798.jpg
single raptor 150
Attachment 68542
2 raptor 150 raid 0
Attachment 68543
2 seagate 500 7200.10 raid 0
Attachment 68544
I don't have images to upload, but I ran my new Samsung 1TB F1 drive and it read 92.4MB/s in the short/fast HDTach test and 94MB/s in the longer test. That's PHENOMENAL! This is the new drive to beat. Go to Tom's Hardware and check out the hard drive charts. This thing destroys the competition.
(I'll work on the images to prove it)
2x 320GB Samsung 16MB cache RAID0 nforce 590 chipset
3x Samsung 320GB 16MB cache RAID0 ICH9R - below sig
This is from my webserver: 2x 320GB Western Digitals in RAID 1 on a Promise FastTrak TX2300 RAID controller.
http://www.needmoreboost.com/wes/hdtach.jpg
This is from my 2x 150GB Raptor RAID 0 array on the Intel ICH7R.
I think something is going on.
http://www.needmoreboost.com/wes/hdtach_before.jpg
http://www.needmoreboost.com/wes/hdtune_before.jpg
^ lol neat pattern there
You should get higher bursts and reads (~150MB/s).
Hey wes, what part of NW Ohio are you in? I run a computer store in Napoleon, OH...
6x 150GB Raptor 16MB cache, Intel ICH9R RAID 0, 64K stripe:
http://myalbumbank.com/albums/userpi...t8tightest.JPG
http://myalbumbank.com/albums/userpi.../hdtune_wc.JPG
http://img.photobucket.com/albums/v2...913/hdtune.jpg
http://img.photobucket.com/albums/v2...913/hdtach.jpg
Very average while running iTunes and Firefox, and no defrag. Good enough for me, since I can't figure out how to set up a RAID array for the life of me. Someone please help me.
Where are you having problems?
http://www.dawnitgirl.com/pic/pass1.gif
10 Raptor RAID0 scratch disk on Areca 1261ML with 2GB cache.
Maxtor 250GB SATA 8MB
http://img504.imageshack.us/img504/4...axtor7yyy8.png
A quick bench of hd tach RW
http://img137.imageshack.us/img137/4264/76862637ie3.jpg
Are these benches ok for my hard drive?
http://img69.imageshack.us/img69/7475/83728723uf2.png
How can I activate UDMA mode 6 (Ultra ATA/133)?
Any suggestions would be much appreciated.
Rubycon... very nice array; shows what a real controller can do. :)
A quick run using:
6x Seagate drives, 16MB cache, RAID 0, 64K stripe
ICH9R Matrix Vista x64 WBC Enabled
Abit IP35 Pro
Board: MSI K9A2 Platinum
*edit*
using XP x64 edition
single Raptor on ATI SB600:
http://apokalipse.googlepages.com/hdtuneraptor.PNG
single 750GB Seagate 7200.10 on SB600:
http://apokalipse.googlepages.com/hd...0gb7200.10.PNG
single 750GB Seagate 7200.10 on Promise T3:
http://apokalipse.googlepages.com/hd....10promise.PNG
Promise controller gives faster throughput, but higher access time.
Q6600 @ 2.4GHz (default 9x266), mem @ 800MHz default, everything else @ default
3x500GB Seagate 7200.10 SATA RAID 0
http://img148.imageshack.us/img148/1...st2longmr1.jpg
3x500GB Seagate 7200.10 SATA RAID 5
http://img214.imageshack.us/img214/1...00test1kp2.jpg
300gb Seagate 7200.10 ATA is a fourth drive on the Gigabyte controlller
http://img214.imageshack.us/img214/9...00test1ku1.jpg
http://img214.imageshack.us/img214/8...atrixsmcc1.jpg
http://img529.imageshack.us/img529/6...atrixsmgi2.jpg
http://img529.imageshack.us/img529/2...matrixsrr8.jpg
3x 500GB on an Intel Matrix Storage Manager...
120GB RAID 0 stripe 128kb and the rest
850GB RAID 5 stripe 128kb
Write performance is not so good on the RAID 5: 12-20MB/s when copying a big file from the RAID 0 or the 300GB ATA drive.
Maximus Formula BIOS 907
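For anyone wondering why the RAID 5 writes above are so much slower than the reads, it's the classic small-write penalty: without a dedicated XOR engine, every write turns into a read-modify-write cycle. A rough back-of-the-envelope sketch (the IOPS figures are illustrative assumptions, not measurements from this setup):

```python
def raid0_write_iops(per_disk_iops, n_disks):
    """RAID 0 has no parity, so random writes scale with spindle count."""
    return per_disk_iops * n_disks

def raid5_small_write_iops(per_disk_iops, n_disks):
    """RAID 5 small-write penalty: each logical write costs four disk
    operations (read old data, read old parity, write new data,
    write new parity), cutting effective write IOPS to roughly a
    quarter of the raw spindle total."""
    return per_disk_iops * n_disks / 4

# Illustrative 7200rpm drives at ~75 random IOPS each, three spindles:
print(raid0_write_iops(75, 3))        # 225
print(raid5_small_write_iops(75, 3))  # 56.25
```

Sequential big-file copies fare better than this worst case, but on chipset RAID doing parity on the host CPU, 12-20MB/s is in the expected range.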
2x160GB WD1600YD in Raid0
Here's a quick run; this is 2x WD 7200rpm 16MB-cache 250GB drives in RAID 0 :)
My rig is: Asus P5B Deluxe, E6750 @ 3.6 w/c, 2gb Ballistix 6400, Evga 8800gts 640mb, OCZ 700w PSU. :cool:
Specs are in sig, running on the Matrix RAID with my utter crap ICH7R southbridge. 32k stripe size.
The document I've attached is the output file from a program stevecs hooked me up with, called IOZone. Awesome program if you've got the 10 hours to run a benchmark with it. :up: steve.
The document has both MS Excel tables and charts - click the tabs in the bottom left to scroll through the charts. It's a much more thorough test for determining RAID performance. If anyone wants to run IOZone or needs help with it, shoot me a PM and I can set you up with it and do my best to help. :yepp:
3x 250GB Seagate NCQ 16MB buffer (STRIPE)
Write Cache DISABLED:
http://home.pages.at/virgilioborges/... disabled).png
Write Cache ENABLED:
http://home.pages.at/virgilioborges/...c enabled).png
http://home.pages.at/virgilioborges/...tel_matrix.png
Windows Vista Ultimate x86
http://www.dawnitgirl.com/pic/ARC1680.gif
Test size of 50MB means it runs to and from the 2GB cache which is on 8X PCI-E. :D
EDIT: Different every time it's run. Vista is ALWAYS messing with the damn disks even with indexing OFF. grrr
Can anyone please tell me of a good HD stress-test utility or a good error-checking utility? I had to manually install some RAID drivers because my floppy disk corrupted during install; now Windows is behaving badly and I'm getting BSODs.
Thank you for any replies.
Maxtor 250GB SATA 2 16MB
http://img252.imageshack.us/img252/373/250cu6.jpg
Maxtor 80GB ATA 133
http://img262.imageshack.us/img262/6514/80up0.jpg
Perhaps a little off-topic, but as I'm facing the decision to get a new system and I have a pretty high-end disk array, I'm wondering what the performance would be using a 'normal' motherboard?
My current workstation (dual Opteron 246) has an Areca 1280ML with 16x WD5000AAKS on it and 2GB cache. But as these processors clearly need an upgrade, I'm thinking of buying either a fast Core 2 Quad or a (most likely slower, because of the price) Xeon system. So I'm wondering: does anyone around here have experience with a regular desktop system running a high-end disk setup? I haven't found anyone with a similar setup in this topic.
Since workstation motherboards don't offer any overclocking possibilities and are a lot more expensive I find a Core 2 Quad a nice option, if my raid card will still be used to the maximum. I really like the high disk performance I have right now.
PS: I only run Linux and Unix, so some hardware might not be completely compatible/possible.
HD Tach doesn't measure burst right.
With a setup like that you should be getting about 700MB/s; are you seeing results like that?
http://tweakers.net/ext/benchchart/chart/11/
Because I'm afraid a regular motherboard with a single bus might not be able to handle the performance. With my current dual system I can easily flood the entire bus from CPU 1 and still have CPU 2 completely available. Although I guess that wouldn't be the case with a Xeon, since they share the memory and bus anyway.
He was talking about hdtune I suppose, since the screenshot above my post is one of hdtune :)
I'm curious as to what stevecs has to say about your question... he seems to be quite the RAID guru around here.
Actual STR reaches 800MB/s from the ten-disk array, and about 1.9GB/s from the cache.
HD Tach is not measuring burst correctly on some chipsets. No way in hell is HostRAID doing 4GB/s as pictured in some people's posts; obviously that is an anomaly. Raw file-copy performance when cutting A/V data is what matters, and it seems to scale nicely on the 800MHz IOP341s.
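Those numbers can be sanity-checked with simple arithmetic: RAID 0 sequential throughput adds up across spindles until the controller's host link becomes the ceiling, which is also why a 4GB/s "burst" is physically impossible. A minimal sketch (the per-drive and link figures are assumptions for illustration, not measurements from that array):

```python
def raid0_str_mb_s(per_drive_mb_s, n_drives, host_link_mb_s):
    """Approximate RAID 0 sequential transfer rate: spindle throughput
    sums until capped by the controller's host interface."""
    return min(per_drive_mb_s * n_drives, host_link_mb_s)

# Ten Raptors at ~80 MB/s each on a PCIe x8 card (~1600 MB/s usable):
print(raid0_str_mb_s(80, 10, 1600))  # 800

# No spindle count rescues you from the link cap:
print(raid0_str_mb_s(80, 40, 1600))  # 1600
```

Anything a benchmark reports above the host link itself has to be a cache or measurement artifact.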
HDD : 1 x Hitachi 7200rpm 160GB 8mb cache sata2
Controller : AMD SB600 ;)
Still waiting on my second 150 raptor.
If you're not going to change your disk subsystem (i.e. the type of drives, or separating your workflows onto different spindle groups, etc.), you're not going to see much of an improvement at all. Basically, your computer's motherboard/CPU etc. is generally NOT the bottleneck; it's mainly the drive subsystem that kills you, at least first. There are numerous bottlenecks in a system, but that's a much bigger discussion (a decent reference is John Hennessy's and David Patterson's book "Computer Architecture: A Quantitative Approach").
Before anyone can really point you in a direction, a lot more information would be needed to model your type of access requirements and setup. I've just posted some spreadsheets in the storage/RAID thread for doing quantitative analysis, but they are only a start.
http://www.xtremesystems.org/forums/...=150176&page=2
It really comes down to what your workload type is and what performance level you're seeking; from that you design the system to match. There's also the issue of disk-subsystem utilization curves which come into play (queuing theory and Markov models like the M/M/1 or M/M/m).
My concern is mostly based on previous systems where the RAID array would easily do 100MB/s and more, but when transferring via the network the PCI bus was immediately saturated, limiting network transfers to 60MB/s. With PCI-E lanes these restrictions obviously no longer apply (the shared PCI bus, I mean), but since I haven't had the chance to test a setup like this with a 'normal' motherboard, I wasn't sure what to expect.
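That old 60MB/s ceiling follows directly from shared-bus arithmetic: on classic PCI, every byte crosses the same bus twice, once from the disk controller into RAM and once from RAM out to the NIC. A quick sketch (the ~90% bus efficiency figure is an assumption):

```python
PCI_PEAK_MB_S = 133.0  # 32-bit / 33MHz PCI theoretical peak

def max_shared_pci_network_mb_s(pci_peak=PCI_PEAK_MB_S, efficiency=0.9):
    """Usable network throughput on a shared PCI bus: each byte is
    transferred twice (disk controller -> RAM, then RAM -> NIC), so
    the cap is about half the effective bus bandwidth."""
    return pci_peak * efficiency / 2

print(round(max_shared_pci_network_mb_s()))  # 60
```

With PCI-E the links are point-to-point, so the disk and network paths no longer contend for one bus.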
Fantastic information, this is something I can work with. Although queueing theory and Markov models usually only come into play at higher loads, it's still a very interesting read; thank you very much.
Quote:
Before anyone can really point you in a direction, a lot more information would be needed to model your type of access requirements and setup. I've just posted some spreadsheets in the storage/RAID thread for doing quantitative analysis, but they are only a start.
http://www.xtremesystems.org/forums/...=150176&page=2
It really comes down to what your workload type is and what performance level you're seeking; from that you design the system to match. There's also the issue of disk-subsystem utilization curves which come into play (queuing theory and Markov models like the M/M/1 or M/M/m).
As for my workload type, it's mostly transferring big files over the network (big as in 100MiB and more) to about 10 clients simultaneously so the queueing part will be quite simple and will most likely not give problems. My performance concerns are primarily related to the hardware limitations of the system.
It looks like I have enough information to start a proper analysis; perhaps I'll try to calculate the Markov chain for it, which could give me some new insights.
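For reference, the basic M/M/1 figures discussed in this exchange are one-liners to compute. A minimal sketch with made-up arrival and service rates (not measured from this server):

```python
def mm1_stats(arrival_rate, service_rate):
    """Classic M/M/1 results: utilisation rho = lambda/mu, mean number
    of requests in the system L = rho/(1-rho), and mean time a request
    spends in the system W = 1/(mu-lambda). Only valid while rho < 1;
    beyond that the queue grows without bound."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable: arrival rate >= service rate")
    mean_in_system = rho / (1 - rho)
    mean_time = 1.0 / (service_rate - arrival_rate)
    return rho, mean_in_system, mean_time

# Illustrative: large-file requests arrive at 6/s, the disk subsystem
# completes 8/s on average:
print(mm1_stats(6, 8))  # (0.75, 3.0, 0.5)
```

The steep blow-up of the wait time as utilisation approaches 1 is the "utilization curve" effect mentioned above.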
Correct - those models require a system in equilibrium and an unlimited number of incoming requests, among other assumptions. But it sounds like you know that.
As for network transfer of large files, another thing I've discovered recently is that switches are much more of a bottleneck than normally thought when moving large amounts of data. I've run tests with Cisco switches (3750s, 3xxx, 4xxx et al; haven't tried the various 65xx's yet) showing they cannot push full GbE speed on their NICs. They generally do OK in one direction, but not in both (send/receive); it looks like they are capped at 1GbE total per NIC (i.e. one direction).
Then there is network filesystem overhead (like SMB/CIFS), which just sucks at high-speed performance (usually capped at around 60MiB/s or so per client connection).
Yes, I have some knowledge about queueing theory. As I'm currently programming with distributed systems I know a bit about multithreading and proper/fast ways to handle queues.
As for network limitations, that is one of the reasons I'm not a big fan of Cisco equipment; I'll take Foundry or Juniper over Cisco any day. Or, if money is a big issue, HP switches have been performing quite well for me as well: 100-110MiB/s in both directions seems to be possible without the switch giving the least bit of trouble (HP ProCurve 2824 or 2848 switches with 24 or 48 1GbE ports). The network won't be the limit here - perhaps the specific network cards, though, as the onboard NIC of my K8WE only does 75-80MiB/s while the 3c985 fiber NIC does 120MiB/s easily.
And yes... network filesystems. I'm currently using NFS, but I'm indeed not too happy about the performance. Perhaps I should consider switching to a virtual iSCSI system or ATA over Ethernet. But the last time I tried ATA over Ethernet it was a bit unstable; it caused the kernel to lock up once in a while (and crashes in combination with ATA over Ethernet result in filesystem corruption), so I'm not that eager to try it again.
Either way... I'm going to look for a new motherboard with at least 1x PCI-E 16x for the video card, 1x PCI-E 8x for the RAID card, and preferably PCI-X for the network cards (PCI-E fiber cards are still quite expensive at the moment, and I already have a load of PCI-X network cards lying around). If anyone has a tip, I'm open to suggestions ;)
I'm currently looking into Gigabyte, but perhaps I'll have a look at Asus, Tyan or SuperMicro as well.
You may want to try NFS v4 if you're not already, due to some of its optimizations with command merging, among other items. It's a bit more of a pain for client setup, though. I haven't really tested it in a distributed environment yet.
As for MBs, I've used the Asus ones; the P5W64WS Pro has 3x 8x and 1x 4x PCI-E (I've been trying to avoid PCI-X as I'm looking more towards 10GbE, and PCI-X can't handle it). The main item, though: if you want end-to-end performance you want an Intel-based NIC. I haven't found any of the others (Broadcom, Marvell, 3Com, et al) able to push full wire speed in full duplex. For workstation-class boards, both Tyan and Supermicro have been good for me in the past.
Good luck, and thanks for the tip on Juniper/Foundry; I'll check out some of those.
Thanks for the NFS v4 tip; I haven't tried switching to it yet. The total environment is not really distributed though, just some of the software I'm writing for work :)
10GbE sounds great to me, but it's still way too expensive IMHO; trunking a few gigabit lines (dual or quad connections) should be enough for now.
After your recommendation I've given Asus a good look, and I think the P5E64WS Pro might be an option if I decide to switch to PCI-E, or the P5E3WS Pro if I stay with PCI-X. I'm looking at some Intel PRO/1000 PT dual-port adapters right now; it seems they aren't too expensive. I'm actually considering a motherboard that would allow a little overclocking too, and unfortunately I haven't found a Tyan/SuperMicro motherboard that allows that (yeah, under Windows with some tools, but I don't run Windows ;)).
4 x 150GB Raptors in Raid0, 128KB stripe size, Areca ARC-1210
http://img187.imageshack.us/img187/3...recaarcod2.png
http://img296.imageshack.us/img296/7...recaarckf7.png
http://img259.imageshack.us/img259/8...hmarkarty3.png
Mine looks pretty good.
As you can see, I have a Raptor. The transfer and burst levels aren't spectacular, but the access times are. Attachment 72027
160Gb SATA @5400 rpm on my
32MB
http://www.jmax-hardware.com/images/...0/hdtach32.jpg
8MB
http://www.jmax-hardware.com/images/...00/hdtach8.jpg
Two Hitachi Deskstar 320gb (7200rpm, 16mb cache) RAID 0
Here is a Skulltrail bench with onboard RAID
Vista Ultimate 64 bit
4x 150GB RAID 0
http://fugger.netfirms.com/D5400XS/hdvista.jpg
I need to get that Areca card.
This is what my 4x Samsung F1 32MB 1TB drives do in RAID 0.
http://members.lycos.nl/niscoracing/...ng%20RAID0.png
Just a single Seagate 7200.10 and it beats a craptor :D
http://img166.imageshack.us/img166/9...hnj1tx1.th.jpg
Seagate 7200.11 500GB.
http://scoombes15.googlepages.com/seagate.jpg
Nisco, that's impressive.
Nisco, I was looking at those 32M drives, looking even better now.
Fugger
get in touch with the peeps @ fusion-io to get a sample
the iodrive would be a great addition to skulltrail
at least that's what I'm planning on...
4x Seagate 7200.10 500gb
ICH9R 0+1 64kb stripe Write Cache Enable
P5K-E Q6600 @3.2 2GB
Vista x64
http://aycu25.webshots.com/image/435...2757287_rs.jpg
http://aycu38.webshots.com/image/422...0098377_rs.jpg
That is not an SSD!
That is:
http://xtreview.com/images/samsung-s...t-F1%20-01.jpg
2xSamsung 500GB 1xWD 500GB
Samsung SpinPoint T166 HD501LJ - 500GB
http://img517.imageshack.us/img517/3...ng500gbct2.jpg
http://img517.imageshack.us/img517/4...amsunghxo5.png
Samsung SpinPoint T166 HD501LJ - 500GB 2nd
http://img352.imageshack.us/img352/7...ng500gbjr2.jpg
http://img87.imageshack.us/img87/744...amsunghwa1.png
WD5000AAKS
http://img151.imageshack.us/img151/3...dcwd500lk4.png
http://img141.imageshack.us/img141/4...3102007yn5.jpg
All tested on same rig:
Asus p5k-e wifi
4x1gb Geil 1ghz@1130
2140@3ghz
Two 320GB 7200.10 Seagate drives. 100GB Intel Matrix partition.
http://i159.photobucket.com/albums/t...henator/hd.jpg
Maxtor 300GB | SATAII | 16MB
nothing too spectacular here:
31.7 MB/s average
0.7ms access time ;)
Eee's internal SSD.
How's mine?
250GB WD Caviar SATA 16MB Cache
http://img297.imageshack.us/img297/8062/hdddpe8.jpg
Here is what my two raptors on the new Maximus Formula got me.
Okay, so you see that you have a blue line that starts at around 62 and falls to around 34. This is the transfer rate as the drive is tested from the outer edge of the platter to the inner, and that falloff is normal.

The burst rate is the fastest that data can be transferred to and from the drive; it is helped mainly by the controller, the drive's cache, and the drive speed.

The little yellow dots show the time it took to do random accesses at various points across the platter, with the average shown being 13.4ms. This depends mostly on the drive's rotational speed, but the smaller the platter the better (your average would be much lower if you cut off the high points at the far end of the platter).

And lastly, CPU usage. As the program states: "The CPU usage shows how much CPU time (in %) the system needs to read data from the hard disk." Generally the lower the better, but for a single drive this won't be very large. RAID 5 setups will show high numbers here (unless using a controller with onboard processing) because parity calculations have to be done during data storage and access.

It all looks fairly correct and normal based on what my 250 gig did.
I just wanted to drop in and say that I have gotten my whole system up and running with two RAID 0 arrays: 2x 74GB Raptors (the ones posted above) and 2x 500GB Seagate 7200.11s. Below are the benchmarks; I thought they were pretty interesting.
three raptors in RAID0
http://i8.photobucket.com/albums/a11...ixen/array.jpg
My little big Savvio 15K.1 on a crappy Promise TX2650, but it's kicking every 3.5" drive's ass regarding access time ;)
http://home.arcor.de/partywg2/Pics/15k1.hdtach.jpg
http://home.arcor.de/partywg2/Pics/15k1.hdtune.jpg
Will give her a real controller soon and see what she can do :)
For those that don't know, it's a 2.5" SAS 15,000 rpm drive from Seagate, featuring 16MB cache. Currently the fastest mechanical HDD when it comes to access time and IOPS, which makes it the fastest HDD overall except for transfer rates.
But that's not even the best part - put in a Scythe Quiet Drive, this baby can hardly be heard, far quieter than a normal 7200 rpm desktop drive. You just need a decent SAS controller, and they aren't cheap.
4x WD740ADFD vs 4x WD6400AAKS @ HighPoint 3510
raptor
http://aycu03.webshots.com/image/451...6468814_rs.jpg
6400
http://aycu14.webshots.com/image/472...4210655_rs.jpg
raptor
http://aycu05.webshots.com/image/458...8349068_rs.jpg
6400
http://aycu30.webshots.com/image/450...8333287_rs.jpg
Areca 1680 SAS w/2GB cache; 6x Fujitsu MAX147GB RAID0 128K STRIPE
http://www.dawnitgirl.com/pic/1680fmax.gif
2x Mtron SSD RAID 0, 16KB stripe size, on Areca ARC-1231DML with 2GB DDR2-533 cache
http://i225.photobucket.com/albums/d...16K_HDTach.jpg
http://i225.photobucket.com/albums/d...16K_HDTune.jpg
The SAS and SSD benchies make me :eek:
Hehe, keep them coming.
SSDs get me hard. Lucky man.
Here are two of the new 32MB-cache Seagate drives:
A 750GB:
http://img80.imageshack.us/img80/915...e750me7.th.jpg
A 500GB:
http://img80.imageshack.us/img80/689...e500vv7.th.jpg
2x 150 Dell drives (WD) 7200 Rpm 8mb cache running raid 0, 128k stripe
http://img218.imageshack.us/img218/7426/hdth3.png
Seagate 320
http://img227.imageshack.us/img227/4339/testgz4.jpg
Hard drives perform better when Windows is clean.
The Mtron solid state blows away everything... I only wish... How much did that cost you, like $2500?
2x Raptor X 150 in RAID 0 using the onboard eVGA 680i
My two drives in RAID 0... does anyone have Raptor firmware later than 00NLR5?
Is this normal, guys? :eek: Just bought a brand-new Raptor X and I am getting these horrible graphs :( Tested my old 250GB drive on the same rig without any problems (WD2500JS @ eVGA 680i SLI). What do you think - should I return it / RMA it? :confused: Thx
http://img405.imageshack.us/img405/8054/raptorwq9.jpg
http://img443.imageshack.us/img443/8320/raptor2fm3.jpg
[QUOTE=Gorod;2871960]Is this normal guys ? :eek: Just bought a brand new Raptor X and i am getting those horrible graphs :( Tested my old 250Gb drive on the same rig without any problems (WD2500JS @ eVGA 680i SLI) . What do you think - shel i return it / RMA it ? :confused: Thx[/QUOTE]
Make sure you have NOTHING RUNNING IN THE BACKGROUND or you will get poor results... Don't even move your mouse while benching the HD.
Anyone running 3x 750GB or 1TB drives in RAID 0 on an Intel ICH9R chipset?
Sorry to hijack, but I'm desperate - should I try adding the latest driver F6-style while installing Vista?
It still says the 3x 750GB drives are not bootable in the RAID ROM.
I'm stumped.
I'll post the results for the 2 drives in RAID 0 tomorrow (I hope I'll have all 3 running by then!)
Looks like your mobo can't handle arrays over 2TB in size. BIOS update?
I know mine can do it (DFI X38 T2R)
Well, I was using the 1004 BIOS (latest beta), so I went back to 1001 and it's still the same.
Though when I flashed to 1004, it killed 2 of the 3 320GB Samsungs I had installed - hence the upgrade.
Strange. There are Raid controllers that can't handle over 2TB array size, but the ICH9R isn't one of them afaik.
No, I know - Asus, I think, have screwed it up! 1004 killed 2x HDs after flashing to it. Bear in mind I'm a very long-time user of Asus mobos and have never killed anything while flashing a BIOS (except a DFI 875 LanParty many years ago - and I never used them again).
Anyway, I've posted in the Maximus Extreme thread, so I hope to get some answers - but the way I feel at the moment, that board will be lucky to stay out of the bin! Don't buy one!