Are you running WinXP? It seems the Marvell controller runs better in Vista than in XP...
I'm running XP and my HD Tune burst rate is the same as yours... but when I run it in Vista the scores get much higher... around 3k in HD Tach and 1xx in HD Tune.
Vista 64
Hi there guys, I've just upgraded my disk system, so I thought I'd include a few screenshots :) It's six of IBM's 73GB 15k SAS drives in RAID 0 on an Adaptec 5805 card :) I'm trying to find two more to make it 8 :D
Anyways, here's some results:
PC Mark 05 HD Tests (Might try Vantage a bit later)
http://img27.imageshack.us/img27/7619/pcmark05hdd.jpg
CrystalDiskMark 2.2 (100Mb)
http://img27.imageshack.us/img27/562...rk22jpg100.jpg
HD Tune Write
http://img23.imageshack.us/img23/74/6xhdtunewrite.jpg
HD Tune Random Write
http://img25.imageshack.us/img25/534...andomwrite.jpg
HD Tune File Benchmark
http://img27.imageshack.us/img27/447...ebenchmark.jpg
I've done a few more if you'd like them posted as well :)
@ phill
Ok, now try CrystalDiskMark with 1000MB ;)
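(The reason for asking, for anyone following along: a 100MB test file can sit largely in the controller's cache, so the score partly reflects cache speed rather than the drives. Here's a toy model of the effect, with made-up speeds and cache size rather than anything measured in this thread:)

```python
# Toy model: effective throughput when part of a benchmark file is served
# from controller cache. All numbers are illustrative placeholders.

def effective_mbps(file_mb, cache_mb, cache_mbps=2000.0, disk_mbps=600.0):
    """Average MB/s if the first cache_mb of the file comes from cache."""
    cached = min(file_mb, cache_mb)
    seconds = cached / cache_mbps + (file_mb - cached) / disk_mbps
    return file_mb / seconds

print(effective_mbps(100, 256))    # fully cached: reads back at cache speed
print(effective_mbps(1000, 256))   # mostly disk-bound: ~731 MB/s
```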
VelociRaptor 300GB
Linear read test, Areca 1680ix + 5 Vertex SSDs.
More SSD drives did not help :(
http://i413.photobucket.com/albums/p...ssdeverest.jpg
Would you mind testing with the specs from the Intel IOmeter thread? I'm thinking you might be pleasantly surprised even if your peak numbers don't go up with more than five disks in this test, assuming the limitation is peak bandwidth rather than the IOPS a bigger array can handle, or the MB/s at different queue depths for 4k random writes.
The link to the thread I'm talking about
http://www.xtremesystems.org/forums/...d.php?t=167857
I'd personally love to see whether the bandwidth cap matters at all in situations where you probably aren't bandwidth-limited.
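To make the IOPS-vs-bandwidth distinction concrete: throughput and IOPS are tied together by the block size (MB/s = IOPS x block size), so a 4k random-write workload can be completely IOPS-bound while sitting far below the array's sequential cap. A minimal sketch, with placeholder figures rather than results from this thread:

```python
# MB/s = IOPS * block size. Placeholder numbers for illustration only.

def iops_to_mbps(iops, block_kib):
    """Throughput implied by an IOPS figure at a given block size."""
    return iops * block_kib * 1024 / 1_000_000

print(iops_to_mbps(20_000, 4))    # 4 KiB random writes: ~82 MB/s
print(iops_to_mbps(8_000, 128))   # 128 KiB sequential: ~1049 MB/s, cap territory
```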
No bench - it's real! :toast:
http://www.xtremesystems.org/forums/...2&postcount=72
Here's a RAID 5 I just set up for storing files/backup images and digitizing my DVD/CD collection:
an Areca 1222 and 8x WD 500GB Green Power drives.
http://i43.tinypic.com/288caj8.jpg
Also a couple of Intel X25-M SSDs running on the onboard ICH10 of my gaming rig (no room for a RAID card with my tri-SLI :( ).
http://i42.tinypic.com/24oxcia.png
Summary: OCZ Vertexes in RAID-0 outperform WD Raptors in RAID-0 by a factor of four.
Both RAID block sizes are set to 128k, because the X38 onboard RAID controller doesn't allow larger sizes.
Pay close attention to the scale of the graphs. They are very different.
My first test used HD Tach in compatibility mode. I like HD Tach, but running it in compatibility mode made me a little nervous.
Vertex Results:
http://i228.photobucket.com/albums/e...7/Vertexes.jpg
Raptor Results:
http://i228.photobucket.com/albums/e...77/Raptors.jpg
Then, I ran an Everest "read suite" test on both:
http://i228.photobucket.com/albums/e...tReadSuite.jpg
Finally, I ran HD Tune on both volumes using the default 64k chunk size (let me know if any of you want different chunk sizes tested; these can dramatically affect RAID-0 performance, as the sketch after the Raptor results below illustrates).
Vertex Results (notice the access times pegged at the bottom):
http://i228.photobucket.com/albums/e...4KVertexes.jpg
Raptor Results (not sure about the down-spike):
http://i228.photobucket.com/albums/e...64KRaptors.jpg
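A rough sketch of why the benchmark chunk size matters so much in RAID-0: a request smaller than the stripe size usually lands on a single drive, so the array only scales once a request spans multiple stripes. This is an idealized best-case model that ignores controller overhead; the stripe and disk counts below are just examples:

```python
import math

def disks_touched(request_kib, stripe_kib, n_disks):
    """Best-case number of drives a single request spans in RAID-0."""
    return min(n_disks, max(1, math.ceil(request_kib / stripe_kib)))

# Example: 2-disk RAID-0 with the 128k stripe used above.
for request in (4, 64, 128, 256):
    print(f"{request:>3} KiB request -> up to {disks_touched(request, 128, 2)} disk(s)")
```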
Finally, you can see that a full Vista install with drivers still leaves plenty of room for other stuff on the SSDs. Note that the Raptors are currently empty (a perfect place for program files) and that I have a networked drive for all media/backup stuff in case my RAIDs die. Also note the 80GB drive (e:\), whose sole function is to handle the pagefile.
http://i228.photobucket.com/albums/e...Capture-49.jpg
New benchie with 512 byte-size blocks in HD Tune:
Vertexes:
http://i228.photobucket.com/albums/e...512kblocks.jpg
Raptors:
http://i228.photobucket.com/albums/e...512kblocks.jpg
6x 500GB Seagates in 500GB partitions on a HighPoint 4320
Partition #1
http://i41.tinypic.com/2dgwax1.jpg
Partition #2
http://i43.tinypic.com/33jtgrp.jpg
2x Acard 9010 (Hyperdrive 5) @ ARC-1261D-ML (2GB ECC-Ram)
http://www.abload.de/img/hc_585j50g.jpg http://www.abload.de/img/hc_586q787.jpg
http://www.abload.de/img/hc_58722jc.jpg http://www.abload.de/img/hc_588j7rb.jpg
http://www.abload.de/img/hc_59053mp.jpg http://www.abload.de/img/hc_591w6sb.jpg
http://www.abload.de/img/hc_589y5z6.jpg http://www.abload.de/img/hc_5928l5g.jpg
I think this is my maximum...
Left with 128k stripe size - right with 4k stripe size...
http://www.abload.de/img/hc_594o3tf.jpg http://www.abload.de/img/hc_5988337.jpg
Both with 128k stripes...
http://www.abload.de/img/hc_595d0n0.jpg http://www.abload.de/img/hc_59698qp.jpg
With 4k stripes and the Workstation pattern...
http://www.abload.de/img/hc_597q3mc.jpg
@F.E.A.R: That was ridiculous! If you keep posting scores like that, nobody will dare to post their benchies! :p:
Here's my slow-ass 4x VelociRaptor array (RAID 0) on an Areca 1100 (PCI-X).
http://img408.imageshack.us/img408/4...1100042509.jpg
New results since last time. I added two more Fujitsu MBA3147RCs and made it a RAID 10.
4 x Fujitsu MBA3147RC 147GB 15K RPM SAS in RAID-10
Dell/LSI PERC5/i PCIe SAS Controller
http://www.pcrpg.org/pics/computer/4...c5i_raid10.png
Here's a picture. Two of them fit in a single 3.5" bay.
http://i228.photobucket.com/albums/e..._1024x768_.jpg
Single OCZ Vertex 120GB on SB750, trusty native IDE mode
http://img98.imageshack.us/img98/7613/ssdattocryst.jpg
http://img98.imageshack.us/img98/6216/hdtackssd.jpg
Sorry about covering up the access time graph on one of them!
2x RAID0 Seagate Cheetah 146GB 15K.5 SAS @ Adaptec ICP5805BL 256MB cache, 256KB stripe, 8MB zones
http://img211.imageshack.us/img211/5491/99956515.jpg
3x RAID0 Seagate Barracuda 500GB 7200.11 SD1A (x2), 7200.12 (x1), quarter-stroked, 128KB stripe, 8MB zones
http://img259.imageshack.us/img259/162/intelraid1.jpg
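In case "quarter-stroked" is unfamiliar: it means partitioning only the outer ~25% of each drive, which keeps the heads over the densest, fastest tracks and shortens seeks. A crude illustration under the usual assumption that transfer rate falls roughly linearly across the LBA range (the speeds are placeholders, not measurements of these Barracudas):

```python
# Crude short-stroking model: rate falls roughly linearly from the outer
# to the inner tracks. Speeds below are placeholders, not measured values.

full_gb = 500
stroke = 0.25                      # quarter-stroke: use the outer 25% only
outer_mbps, inner_mbps = 120.0, 60.0

usable_gb = full_gb * stroke
slowest_in_stroke = outer_mbps - (outer_mbps - inner_mbps) * stroke

print(f"usable: {usable_gb:.0f} GB per drive")
print(f"slowest in-stroke rate: ~{slowest_in_stroke:.0f} MB/s (vs {inner_mbps:.0f} at the spindle)")
```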
My first post! :D
6x WD 640GB Blacks in RAID 0 with the Intel onboard RAID controller
http://img21.imageshack.us/img21/922...meterintel.png
http://img21.imageshack.us/img21/176...ccesreadin.png
http://img21.imageshack.us/img21/627...cceswritei.png
http://img21.imageshack.us/img21/117...writeintel.png
http://img21.imageshack.us/img21/846...0readintel.png
Motherboard: Foxconn Blood Rage
4x VelociRaptors in RAID 0 with an Areca 1210SA
http://img21.imageshack.us/img21/295...meterareca.png
http://img21.imageshack.us/img21/130...ccesreadar.png
http://img21.imageshack.us/img21/274...cceswritea.png
http://img12.imageshack.us/img12/274...cceswritea.png
http://img21.imageshack.us/img21/383...writeareca.png
http://img21.imageshack.us/img21/194...0readareca.png
:shocked:
4x Maxtor STM3160813AS = $120
http://img198.imageshack.us/img198/3707/bench.png
:D
Here are my Raptor and RAID 0 results.
http://i44.tinypic.com/2hyhzr6.png
Cheers
Here's a software RAID (2x HD5) under Vista x64.
http://img2.abload.de/img/softoqqe.jpg
IOmeter Workstation pattern with 256 outstanding IOs (meaning heavy load).
http://www.abload.de/thumb/soft2xwtt.jpg http://www.abload.de/img/soft4w0v5.jpg
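As a side note on what 256 outstanding IOs implies: sustained queue depth, IOPS, and average latency are tied together by Little's law (outstanding IOs = IOPS x latency), so deep queues buy throughput at the cost of response time. A quick check with placeholder numbers, not these HD5 results:

```python
# Little's law for a storage queue: outstanding_ios = iops * latency_s.
# The IOPS figures are placeholders for illustration.

def avg_latency_ms(outstanding_ios, iops):
    """Average per-IO latency implied by a sustained queue depth."""
    return outstanding_ios / iops * 1000

print(avg_latency_ms(1, 10_000))    # QD1 at 10k IOPS   -> 0.1 ms per IO
print(avg_latency_ms(256, 40_000))  # QD256 at 40k IOPS -> 6.4 ms per IO
```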
First post: RAID 0, 2x Samsung SpinPoint P120S 250GB
So I did this: changing the PCI clock through 100, 106, 108, 109, 110, and 112 MHz.
Judging from the results, 110 is best; there's a quick scaling note after the screenshots.
PCI at 100
http://img42.imageshack.us/img42/183/100pci.png
PCI at 106
http://img38.imageshack.us/img38/2108/106pci.png
PCI at 108
http://img38.imageshack.us/img38/2235/108pci.png
PCI at 109
http://img265.imageshack.us/img265/8333/109pci.png
PCI at 110
http://img38.imageshack.us/img38/671/110pci.png
PCI at 112
http://img38.imageshack.us/img38/7338/112pci.png
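A quick note on why the PCI(e) clock can move these numbers at all: link bandwidth scales linearly with the reference clock, so 110 vs 100 is at most a ~10% ceiling bump, and anything larger is probably run-to-run variance. A back-of-the-envelope check, assuming a PCIe 1.x x4 link (an assumption about this setup, not something stated above):

```python
# Theoretical PCIe 1.x usable bandwidth vs. reference clock, after 8b/10b
# encoding. Assumes an x4 link -- an assumption, not a stated spec here.

def pcie1_gbs(ref_clock_mhz, lanes):
    """Usable GB/s for a PCIe 1.x link, scaled off the stock 100 MHz clock."""
    per_lane = 0.25 * (ref_clock_mhz / 100.0)   # 250 MB/s per lane at stock
    return per_lane * lanes

print(pcie1_gbs(100, 4))   # 1.0 GB/s at stock
print(pcie1_gbs(110, 4))   # 1.1 GB/s at 110 -- a linear ~10% ceiling increase
```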