Been benching video cards and the CPU a lot these past few days. Many BSODs and hard restarts. The array held up just fine with the 11-series drivers, as opposed to those enterprise drivers...
I've had a single 101 while OCing, no other issues on my board, very consistent performance.
I am using the Enterprise drivers, still using 2R0 m4 as boot drives.
-
new drivers from Intel
http://downloadcenter.intel.com/Sear...ord=%22rste%22
Already downloaded
Haven't noticed anything special yet, will have to do some more testing.
It does not look like TRIM is enabled for RAID yet. (not on the current OROM that is)
-
A few more benchmarks on the X79 using the latest official 3 series drivers. (3.0.1.7016)
There's no change vs the earlier drivers yet.
(Attached benchmark screenshots of the 2R0 830 128GB volume: Intel RAID 0 volume bench, AS SSD x2, and ATTO.)
-
Well, I don't know if this is the correct place to post, but if not, please tell me.
I've observed a strange (at least to me) behavior with OROM 3.0.0.1184, which ships with BIOSes 0604 through 1101 for the RIVE (X79), as follows.
When creating a RAID 0 volume in the OROM, it says my two 128GB m4 SSDs have 119.2 GB each, which is fine, but the volume created this way is only 226.6 GB, and that is the free space I can play with in the OS.
By contrast, when creating RAID 0 with the 10.6.0.1091 OROM on the MIVE (P67), using the same SSDs, the volume is 238.4 GB (2x each drive's individual available space).
The difference between the volumes created by each OROM is about 11.8 GB, which is a lot, I think, for SSDs.
Is the overhead taken by the "enterprise" BIOS and OROM on the volume perhaps related to security structures?
Interestingly, I saw someone in this thread who has the UD7 X79 and 2 m4s in RAID 0 (same as I have on the RIVE), and AS SSD says the volume is 238.4 GB, while the usable space is about the mentioned 226.6 GB.
Has anyone observed this, or is there a bug in my system? If anyone knows about this, could they elaborate? Thanks.
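The figures in the post line up with decimal vs binary gigabytes: a "128 GB" drive is about 119.2 GiB, and two of them give the 238.4 the P67 OROM reports. A quick sketch of that arithmetic (the ~5% reservation figure for the enterprise OROM is derived from the numbers in this post, not from any Intel documentation):

```python
# Hypothetical illustration of the capacity figures reported above.
GB = 10**9          # decimal gigabyte (how drives are marketed)
GiB = 2**30         # binary gibibyte (what the OROM and AS SSD display as "GB")

drive_bytes = 128 * GB              # one 128 GB m4
per_drive_gib = drive_bytes / GiB   # ~119.2, matching what the OROM shows

raid0_gib = 2 * per_drive_gib       # ~238.4, the volume size on the P67 OROM
enterprise_gib = 226.6              # the max the X79 enterprise OROM allows
overhead_pct = 100 * (1 - enterprise_gib / raid0_gib)

print(round(per_drive_gib, 1))   # 119.2
print(round(raid0_gib, 1))       # 238.4
print(round(overhead_pct, 1))    # ~5% apparently reserved by the enterprise OROM
```

So the missing 11.8 GB is roughly a 5% reservation on top of the usual GB/GiB difference.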
When you set up the array, can you not change the number presented to 238.4 GB? I thought that was just what they recommend; I think I was able to change it to max out the array.
Thanks DooRules for the reply.
No, 226.6 GB is the max value I can set. I have tried many times, but with the enterprise BIOS that is the max value.
I switched to the MIVE, and the same 2x m4 R0 gives the 238.4 GB.
Sigh, the C600 RAID on the Rampage IV Extreme has been a major PITA to get working properly. Now it is stuck at SATA II speeds:
Running BIOS 2012, two 120 GB Vertex 3 Max IOPS connected with 6 Gb/s cables (same drives and cables I just removed from a Z68 board that was running over 1000 MB/s sequential read, so I know it is not the drives or cables), SSDs just secure erased, plugged into the 6 Gb/s ports on the MB, secondary/aftermarket SATA chip disabled, created a 128k stripe in RAID 0, BIOS set to RAID, hot-plug disabled and SMART enabled, running Intel C600 RAID driver 3.0.0.3011, everything in the Intel RST interface showing good, SSD write caching enabled.
As far as I can tell everything is set up perfectly and I should be getting SATA 3 speeds, not 2. So is there some super secret SATA 3 BIOS entry or something simple I am missing?
GPU: 4-Way SLI GTX Titan's (1202 MHz Core / 3724 MHz Mem) with EK water blocks and back-plates
CPU: 3960X - 5.2 GHz with Koolance 380i water block
MB: ASUS Rampage IV Extreme with EK full board water block
RAM: 16 GB 2400 MHz Team Group with Bitspower water blocks
DISPLAY: 3x 120Hz Portrait Perfect Motion Clarity 2D Lightboost Surround
SOUND: Asus Xonar Essence -One- USB DAC/AMP
PSU: EVGA SuperNOVA NEX1500
SSD: Raid 0 - Samsung 840 Pro's
BUILD THREAD: http://hardforum.com/showthread.php?t=1751610
Reading comprehension for the lose!
I had misread the following:
http://www.ocztechnologyforum.com/fo...l=1#post714084
I thought it said to lock at SATA III, but I locked them at SATA II when I secure erased. lol
Interesting to see such an option; not sure why it's needed.
The Force 3 used in the Endurance test sometimes downgraded itself to 3Gb/s on one particular MB (ASRock Z68 Ex4); I haven't seen anything like it on other SF-2281 based drives.
-
Hi all, I'm new here.
A few months ago I got the Asus Rampage IV Formula and a Corsair Force GT 240GB.
I also noticed that the 4K-64Thrd write performance is very bad.
Please take a look at my AS SSD benchmarks (Anvil's benchmark shows the same results):
Write cache on in Windows 7 (4K-64Thrd write)
Write cache off in Windows 7 (4K-64Thrd write)
I have this issue with every Intel Rapid Storage Technology driver. With the standard Microsoft AHCI driver, the benchmarks are normal both with and without write cache. The results are reproducible, using either SSD or HDD.
Do you know why that is? Are AS SSD and Anvil's benchmark showing incorrect results?
Oh, and excuse my bad English; I'm Swiss and didn't like learning English in school.
Greets
bonidinimon
Last edited by bonidinimon; 04-19-2012 at 11:55 PM.
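For context on why the write-cache buffer flushing setting moves 4K write numbers so much: forcing a flush after every small write serializes each write on the device instead of letting it complete in the cache. A minimal, hypothetical sketch of that effect (plain Python file I/O with `os.fsync`, not the actual AS SSD or Anvil workload):

```python
# Sketch: compare many small writes with and without a per-write flush.
import os
import tempfile
import time

def write_4k_blocks(n, flush_each):
    """Write n 4 KiB blocks to a temp file; optionally fsync after each."""
    buf = os.urandom(4096)
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, buf)
        if flush_each:
            os.fsync(fd)   # roughly analogous to a flush-cache command per write
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(path)
    return elapsed

cached = write_4k_blocks(256, flush_each=False)
flushed = write_4k_blocks(256, flush_each=True)
print(f"cached: {cached:.3f}s  flushed: {flushed:.3f}s")
```

On most systems the flushed run is far slower, which is the same mechanism the "Turn off Windows write-cache buffer flushing" checkbox toggles.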
The latest release of CrystalDiskInfo (4.6.2) adds support for Intel RAID; as a bonus it also works on X79 using a workaround. I haven't tried RAID yet, but standard drives work by selecting
Function->Advanced Feature->[Beta] Intel RAID (CSMI)->Enable (Access all disks)
-
I checked out raid and it worked, both drives are in a raid.
(Attached: CrystalDiskInfo 4.6.2 screenshot showing the two C300s in the RAID.)
Still, it does not show the drive letter assigned to the RAID, nor for any other drives. (It's still in beta.)
I selected "show drives in RAID only", so the non-member drives aren't shown.
-
Hello,
I have an Asus Rampage IV Extreme and a Samsung 830 128GB, and like you,
the write performance is not great with the latest RSTe, and boot takes longer than with RST 11.x.
Is it by checking that box in Device Manager that you get results like RST 11.x?
Or can the RST 11.x drivers be modded and installed without the version warning?
Thanks.
Sorry for my English
LL A77F - Asus Rampage V Extreme - 5930K - Corsair Platinum 4x4Go 3000 C15 - Zotac 770 - SSD Samsung 850 Pro & 830
Pump D5 with mod Bitspower - EK Supremacy - Koolance GTX680 - HW Labs SR1 280 & EK XTX 360
I tested with an Intel X25-E (checked "Turn off Windows write-cache buffer flushing ...") and the Windows boot is still too long (I also tested this with the Samsung 830).
Does it not work with all SSDs?
And are not all SSDs affected by this "problem"?
Some don't have this problem with the m4 (solved by the latest m4 firmware update?).
Also, I do not install RSTe via the .exe; I install the driver through Device Manager (iaAHCI.inf). Is that a problem?
Thanks for your help.
Sorry for my English
On my board I need to change WCB flushing on all drives, and I've tested most SSDs.
You should get results close to the 11.x drivers.
If booting is slow, it could be driver related, or in some cases the SSD itself can slow down the process.
I have seen the issue (it resembled a timeout issue) but can't recall which drive it was.
It could very well have been the m4s pre firmware update.
I'm currently booting off a Plextor M2P, no issues at all. I always install the RSTe drivers on the X79 and I've always used the 3.x driver.
Edit:
I frequently reinstall as I try different SSDs, normally once a month or so.
Last edited by Anvil; 05-01-2012 at 08:58 AM.
-
OK, thank you.
I tested RST 11.x on my board (I forced the install) and Windows boots fast (as on my X58 platform); with RSTe 3.x, the boot is longer...
Same with a Samsung 830 (but I have not tried the change in Device Manager).
No problem installing the driver through Device Manager (and therefore I don't have the RSTe software).
Thanks.
Sorry for my English
I'm using a couple of RAID controllers/HBAs so my boot process is slow no matter what I do.
The controllers alone add 30-40 seconds or so, so I don't reboot more often than I need to.
What is great is that the board handles 2 storage controllers.
I believe I have tried the 830 both as a single drive and in RAID on the MB without any slowdowns; will retry this weekend.
-
thanks for your help with the 830
Sorry for my english
I noticed that with RSTe, HD Tune's Info tab shows:
Active: UDMA Mode 5
Before (on X58) it was UDMA Mode 6.
Normal or not?
Sorry for my English
Not sure HD Tune can be trusted on this; my M2P shows
Supported : UDMA mode 6
Active : UDMA mode 2
whereas a C300 shows
Supported : UDMA mode 5
Active : UDMA mode 2
(both are connected to 6Gb/s ports)
--
On the bootup matter.
I timed mine and it is pretty slow!
(no hangs, except for when prompts are shown by controllers, a lot of screens flashing)
Time from Windows logo until Logon : 26 seconds
From power on switch pressed until Logon : 1 minute 46 seconds
(LSI 9211 + LSI 9265 are part of the bootup process)
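Putting those timings together (assuming the 30-40 second controller figure from the earlier post in this thread):

```python
# Rough breakdown of the boot timings reported above (a sketch, not a measurement).
total_s = 1 * 60 + 46      # power switch pressed until logon: 1:46
windows_s = 26             # Windows logo until logon

pre_windows = total_s - windows_s            # firmware POST + option ROMs
controller_low, controller_high = 30, 40     # LSI 9211 + 9265 option ROMs (estimate)
board_post = (pre_windows - controller_high,
              pre_windows - controller_low)  # what's left for the board itself

print(pre_windows)   # 80 seconds pass before Windows even starts loading
print(board_post)    # (40, 50) seconds attributable to the board's own POST
```

So most of the slow boot is spent before the OS, which matches the observation that the RAID controllers alone add 30-40 seconds.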
-
There is a new driver at the driver station (it may have been there a week or so):
3.1.0.1085 (the GUI shows 3.1.0.1068)
Testing it right now.
-
Those are the ones I installed, and the boot is long.
The modded RST 11.1.0.1006 drivers are better; I will post the screens tomorrow.
Sorry for my English
The new driver looks to perform about the same as the previous one.
Boot-up so far has been 2-3 seconds slower; it might settle over the next few boot-ups though.
I'll keep it for a few days, then might try the 11-series drivers or go back to 3.0.1.7016.
-