IOmeter doesn't correspond to overall system performance.
What's the point? You guys won't believe me when I tell you that just a few hundred IOPS can beat the crap out of thousands of IOPS.
X25-E 32GB x4
X25-E 60 GB x2
OCZ Vertex Ex 60GBx2
I would rather spend my money on something else
That is what I thought, as it explains the benchmark results: the first run is based purely on the drive's performance, while the second run can be read from the controller's cache (up to 1GB, for example).
Let's say you open Photoshop, then open a 1GB file, then close Photoshop.
When you then open Photoshop again, does that mean performance depends on the drives again and not the cache?
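A toy model may make the cache behaviour clearer. Everything here (the 64 KB block size, the latencies, the working-set size) is an assumption for illustration, not a measurement of any real controller:

```python
# Toy model of a controller read cache; all figures are assumptions
# for illustration, not measurements.
SSD_MS = 0.2     # assumed per-block latency when reading from the SSDs
CACHE_MS = 0.02  # assumed per-block latency when reading from cache

def run_time(blocks, cached):
    """Seconds to read `blocks` 64 KB blocks, cached or not."""
    ms = CACHE_MS if cached else SSD_MS
    return blocks * ms / 1000.0

app_blocks = 2000  # ~125 MB of app data in 64 KB blocks (assumed)
print(f"first launch:  {run_time(app_blocks, cached=False):.2f} s")
print(f"second launch: {run_time(app_blocks, cached=True):.2f} s")
# Opening a 1 GB file in between can evict the app's blocks from a
# 1 GB cache, so the next launch would pay SSD latency again.
```

So yes: a big enough intervening read can push the application back out of the cache, and the next launch is drive-bound again.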
You must configure IOmeter.
Try the workstation pattern with 64 outstanding IOs (moderate load) and/or 256 outstanding IOs (heavy load).
Here you can find 4 patterns: http://www.bigupload.com/code.php?code=T8ECPRXWFH
And here is the manual for the patterns: http://ixbtlabs.com/articles/hddide2k1feb/iometer.html
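As rough intuition for why the outstanding-IO count matters: by Little's law, achievable IOPS scale with queue depth divided by per-request service time, up to whatever the hardware can actually overlap. A sketch with an assumed 0.2 ms service time:

```python
# Little's law intuition: iops ~= outstanding_ios / service_time,
# until the hardware can no longer overlap requests.
# The 0.2 ms service time is an assumption, not a measurement.
def iops(outstanding, latency_ms):
    return outstanding / (latency_ms / 1000.0)

for qd in (1, 64, 256):
    print(f"QD {qd:3d}: up to ~{iops(qd, 0.2):,.0f} IOPS")
# Real drives and controllers saturate well before these ceilings,
# which is exactly what the 64 vs 256 outstanding-IO runs expose.
```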
Have you tried FC-Test? It is a real write/copy test with real files!
Example (it's not a software ramdisk!):
NOTE: Run each of the 5 patterns once!
create install - 0.796 s / 724 MB/s
create iso - 2.013 s / 818 MB/s
create mp3 - 1.389 s / 736 MB/s
create prog - 4.4 s / 323 MB/s
create win - 3.806 s / 288 MB/s
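As a sanity check, the approximate file-set size each pattern wrote can be recovered from the reported time and rate (size ≈ time × rate):

```python
# Recover the approximate file-set size of each FC-Test pattern
# from the reported time and rate above: size ~= time * rate.
runs = {
    "install": (0.796, 724),  # (seconds, MB/s)
    "iso":     (2.013, 818),
    "mp3":     (1.389, 736),
    "prog":    (4.4,   323),
    "win":     (3.806, 288),
}
for name, (secs, mb_per_s) in runs.items():
    print(f"create {name:7s} ~{secs * mb_per_s:6.0f} MB")
```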
When your last operation was on a 1GB picture, your system loads it from cache.
SuperFetch in Vista or Win7 will help too.
Loading 25 apps in 8 seconds. Sorry for the bad quality. http://www.xtremesystems.org/forums/...2&postcount=72
Last edited by F.E.A.R.; 05-18-2009 at 02:45 PM.
And now IOmeter!
For the sake of your ramdrive, I will not!
Hehe.
I'm waiting for PRAM SSDs, but they won't be released in 2009.
btw:
Do you have single-drive results for your PX SSDs (onboard controller only)?
I don't know about the PX's performance.
In Europe we can't buy the SuperTalent PX (I don't know why).
Last edited by F.E.A.R.; 05-18-2009 at 03:04 PM.
I only benched single drives (HD Tach) when I found the difference between serial #R and #P.
Well, you can bet that without the 1231 it wouldn't be pretty.
Last edited by NapalmV5; 05-18-2009 at 03:24 PM.
@ 1231:
PX #Rxxxx = 148 MB/s, 0.2 ms
PX #Pxxxx = 188 MB/s, 0.1 ms
I got 4x PX #Pxxxx.
PX #Rxxxx is the old stock and #Pxxxx the new?
Probably. And a firmware update didn't do anything for the #Rxxxx; it's still 0.2 ms.
Hm...
The PX #Rxxxx looks like MLC, judging by the access time.
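Access time matters because it caps random IOPS at low queue depth: at QD1 a drive can complete at most one IO per access time. Applying that to the two access times above:

```python
# At queue depth 1, access time caps random IOPS: at most one IO
# can complete per access time.
def max_iops(access_ms):
    return 1000.0 / access_ms

for label, ms in (("PX #Rxxxx", 0.2), ("PX #Pxxxx", 0.1)):
    print(f"{label}: ~{max_iops(ms):,.0f} IOPS ceiling at QD1")
```

Halving the access time doubles the QD1 random-IO ceiling, which is why the 0.1 ms drives feel so much snappier than the 0.2 ms ones.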
Interesting debate. Would it be fair to say this for an Intel SSD/ hard raid set up?
Intel drives ramp up quickly, reaching peak read/write transfer-rate efficiency at around 8K/16K, after which throughput levels out.
Hardware RAID is not used to targeting high-speed random data in small chunks, so its peak read/write transfer-rate efficiency is pushed out to 64K before it levels out; however, the controller cache covers this shortfall of the SSDs.
In other words, hardware RAID lowers peak transfer-rate efficiency at typical OS workloads, but the cache makes up the shortfall and provides a clear advantage at transfer sizes above 64K.
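This ramp-up behaviour can be modelled as two regimes: small blocks are IOPS-bound, large blocks are bandwidth-bound, and throughput levels out where the two curves cross. The IOPS and peak figures below are illustrative assumptions, not measurements of these devices:

```python
# Two-regime model: throughput = min(IOPS-bound, bandwidth-bound).
# The IOPS and peak-MB/s figures are illustrative assumptions only.
def throughput_mbs(block_kb, max_iops, peak_mbs):
    return min(max_iops * block_kb / 1024.0, peak_mbs)

print("block |  'SSD' | 'RAID'")
for kb in (2, 4, 8, 16, 32, 64, 128):
    ssd = throughput_mbs(kb, 35000, 250)   # fast ramp: caps by ~8 KB
    raid = throughput_mbs(kb, 12000, 750)  # slower ramp: caps at 64 KB
    print(f"{kb:4d}K | {ssd:6.1f} | {raid:6.1f}")
```

With these made-up numbers the "SSD" column flattens by 8K while the "RAID" column keeps climbing until 64K, which matches the shape of the behaviour described above.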
Yeah, at first I thought I got MLC at the price of SLC, but the write performance is higher than that of their OX SSD (MLC).
Let me make it clear: by "it wouldn't be pretty without the 1231" I was not referring to the cache. Whether it's 512MB or 2GB of cache, the 4x SSDs perform at max.
Here's a stripe across 2 Arecas (ARC-1261D-ML + ARC-1210) using software RAID (Vista x64), with 1 Acard 9010 on one of the Arecas.
The right pic shows 2x Acard 9010 on 1 Areca (ARC-1261D-ML).
Workstation-Pattern with 256 outstanding IOs (heavy load)
2x Acard 9010 with 1 Areca (ARC-1261D-ML)
Forget IOmeter; let me see FC-Tests side by side.
OK, the ARC-1210 limits this array. It's not faster than 2x Acard on the ARC-1261.
I must use 2x ARC-1261 (or 1231/1280).
And LVM2 on Linux is the better choice. Linux sits closer to the Arecas' hardware than Vista does.
So much for IOmeter...
Exactly. I saw your other post: 1x 1261 with the ramdrive is still faster than software RAID across multiple controllers.
You need 2x or more of the same controller to get something worthwhile out of software RAID.
Napalm... please bear with me.
The benchmarks below are flawed as a direct comparison: the IOmeter tests differ, and the Intel benchmark is based on the M (lol). But I think the M and E are similar in the way they ramp up quickly to peak read/write transfer-rate efficiency before leveling out. You can see on the Intel chart that the M peaks at 4K before evening out at around 75 MB/s, roughly the same value as the Arc benchmark. 2K, on the other hand, is about 50% slower on the Arc benchmark, but anything above 4K and the Arc blows the M out of the water.
The E's on the Arc also don't peak until between 32K and 128K, depending on the array configuration. So the question I am asking is this: is hardware RAID a bottleneck for SSDs on small reads/writes, an area that has never before been a target of optimisation on hardware RAID because of HDD limitations?
Edit: Arc 1231 IOmeter settings (taken from here):
# of outstanding I/Os: 16 per target
Burst length: 16 I/Os
Volume size: 10 GB
Intel X25-M IOmeter settings:
Queue depth: 32 across a 10% disk span
Last edited by Ao1; 05-19-2009 at 01:50 PM.
^ I asked Areca back in 2006 about that. It's going to take a whole lot of us to convince them.
@ audienceofone - sorry about that
Well, I see a 5 MB/s difference between 1x X25 and 5x X25 @ 4K.
If the 1231 were a bottleneck at 4K, then 5x X25 should be 5 MB/s under 1x X25.
The X25's controller is programmed for a very rapid ramp-up.
The 1231's controller is programmed for a more linear ramp-up.
If Areca programmed the 1231 for a rapid ramp-up like that, 2K would be no different than 4K.
Now, is it just programming? Is it also a capability of the controller? I don't know; I'm not the maker of these controllers.
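The bottleneck argument above reduces to a simple comparison: if the controller were the limiter at 4K, the array could not match a single drive. A sketch with placeholder numbers (assumed, not measured):

```python
# If the controller were the 4K bottleneck, a 5-drive array could
# not match a single drive. Numbers are placeholders, not measurements.
def scales_past_controller(one_drive_mbs, array_mbs):
    return array_mbs >= one_drive_mbs

one, five = 75.0, 80.0  # assumed 4K results for 1x and 5x X25
print("controller is not the 4K bottleneck"
      if scales_past_controller(one, five)
      else "controller overhead limits 4K throughput")
```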
One thing's for sure: cache definitely helps the controller be more efficient, the same way cache helps a CPU be more efficient.
How much cache? Ultimately it comes down to the apps that run against these controllers; some prefer more cache, some less.
Here's the problem: if efficiency favours less cache, then apps that prefer a large cache get hurt, the same way a large cache hurts small-cache apps.
It comes down to finding a balance for a win/win situation.
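The cache trade-off can be put in one formula: with hit rate h, effective latency is h·t_cache + (1−h)·t_media, so a cache only pays off for workloads that actually generate hits. A minimal sketch with assumed latencies:

```python
# Effective latency under a cache with hit rate h:
#   t_eff = h * t_cache + (1 - h) * t_media
# The latencies below are assumed for illustration.
def t_eff(hit_rate, t_cache_ms, t_media_ms):
    return hit_rate * t_cache_ms + (1 - hit_rate) * t_media_ms

for h in (0.0, 0.5, 0.9):
    print(f"hit rate {h:.0%}: {t_eff(h, 0.01, 0.2):.3f} ms effective")
```

A workload with a 0% hit rate sees no benefit at all, which is why the right cache size depends entirely on the application mix.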
Last edited by NapalmV5; 05-19-2009 at 09:12 PM.
@ Napalm
What do you think?
Is an ARC-1680ix with the newest firmware the better choice (better than the ARC-1231/1261/1280)?
I'm looking for tests of the 1680ix + new FW + SSDs.
But look (info on Areca's IOP348 controllers)...
Do IOP348 controllers simulate SATA in software? For SAS controllers, SATA drive support was not a first-priority design goal; the controller uses software to simulate the SATA protocol. Compatibility between SAS and SATA is still being updated. For example, if you are using the newest Seagate 1TB drive, you have to use the newest drive firmware (SN05) for better compatibility with Intel SAS processors. For some other SATA drives, disabling the SATA NCQ feature in the controller settings is the better choice to avoid abnormal drive failures. In other words, although most of these SATA/SAS compatibility issues have solutions now, Intel and the disk vendors are still working together towards full compatibility.
Last edited by F.E.A.R.; 05-20-2009 at 01:07 AM.