Thread: X79 Storage - the Intel C600 series chipset SATA RAID Controller

  1. #1
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I almost forgot: SMART does not work like it used to. I'm not sure what has changed; it may be because the drives appear to be handled differently. (more SCSI-like)

    This is what CDM (CrystalDiskMark) looks like.



    Interestingly, the Intel SSD Toolbox is not capable of reading/displaying SMART.
    (both Drive Details and SMART Details are disabled)


    The Toolbox notes that the SMART feature is disabled, but there is nowhere to enable it except for an option in the "bios", and that option is already set to Enabled.
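For anyone who wants to probe SMART outside the Intel tools, smartmontools might be worth a try; recent builds can talk to drives behind Intel RAID controllers via CSMI. A sketch, assuming a reasonably current smartctl is installed; the device paths below are illustrative, so check the scan output on your own system:

```shell
# List the devices smartctl can see, including CSMI ports behind Intel RAID
smartctl --scan

# Query SMART through the CSMI interface; /dev/csmi0,0 means port 0 on controller 0
# (the exact path depends on your system; use whatever --scan reported)
smartctl -a /dev/csmi0,0

# If the drive shows up as a plain SCSI device, forcing SAT passthrough may help
smartctl -d sat -a /dev/sda
```

If smartctl can read the attributes while CDM and the SSD Toolbox cannot, that would suggest the tools simply aren't handling the new SCSI-style presentation, rather than SMART actually being off.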
    Last edited by Anvil; 12-01-2011 at 04:37 PM.

  2. #2
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    FWIW, I also found a reduction in performance on P67 with the 3.x driver and an Agility 3. The performance seemed very similar to what I had with the 11.5 driver. 11.0 had better throughput; IIRC, Anvil benchmark scores were something like ~7500 with 11.0 and ~5600 with 11.5 / 3.0.

  3. #3
    Lord Micron
    Guest
    Quote Originally Posted by some_one View Post
    FWIW, I also found a reduction in performance on P67 with the 3.x driver and an Agility 3. The performance seemed very similar to what I had with the 11.5 driver. 11.0 had better throughput; IIRC, Anvil benchmark scores were something like ~7500 with 11.0 and ~5600 with 11.5 / 3.0.
    Unless you hacked the INF files, I'm not sure how you installed it on a P67, given that the RSTe 3.x drivers are only intended for the C600 chipset, which only the X79 boards currently have.
    Last edited by Lord Micron; 12-02-2011 at 10:31 AM.

  4. #4
    Xtreme Member
    Join Date
    Sep 2009
    Posts
    190
    No need to hack anything; just go to Device Manager and install the driver manually. Windows will probably tell you it's not for that hardware but will let you install it anyway. For me there is a performance hit, so there's not much point using it at this time, but it's good to know it can be done should TRIM in RAID 0 not materialize for P67.
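For anyone who prefers the command line over Device Manager, pnputil (built into Windows) can stage the driver package directly. A sketch, assuming the extracted RSTe package sits in C:\RSTe; the folder and INF filename here are examples, not the actual package layout:

```shell
rem Stage and install the driver package (Vista/7 pnputil syntax)
rem The folder and INF name below are illustrative; use what the extracted package contains
pnputil -i -a C:\RSTe\iaStorA.inf

rem List staged third-party driver packages to confirm it went in
pnputil -e
```

This does the same thing as the Device Manager route: the package lands in the driver store, and Windows will still warn that the hardware IDs don't match.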

    I did try to run some Anvil BM's with different drivers but something went astray on W8.

  5. #5
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by some_one View Post
    ...
    I did try to run some Anvil BM's with different drivers but something went astray on W8.
    I'll have a look at it; quite a few things have changed on W8, so I will most likely have to make some changes.
    I've got W8 in a VM but no time to play. It is a bit early as well; most likely things will change in February.
    You could try running it "as Administrator" just to see what happens. (meaning it could be related to user rights)

    From what I'm seeing, I'd say stay on the 10/11 series OROMs/drivers on Z68/P67: the 3.x series does not support Smart Response Technology (SRT for short, i.e. using an SSD as cache), and from the looks of it the driver is taking a new direction.
    The 3.x series might not fully support the chipset/PCH on the "plain" Sandy Bridge platform.
    (time will tell, TRIM on arrays in particular)

  6. #6
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    A bit more on the new GUI

    How to create a raid volume (raid 1/5/10/0)

    Step 1

    Selecting the controller! (I wonder what other controllers they have in mind)

    Selecting volume type



    Apart from the GUI changes, there are no new features vs. the older driver.



    The Advanced tab is no different from 10/11.


    You need to tick "Proceed with deleting data" in order to create the volume.



    Once the volume is created, we can have a closer look at what we can do and change.

    As I did not utilize the full size of the array, there is still space left for another small volume, or one can increase the size, just like on the 10/11 series.
    Disk data cache can be Enabled or Disabled. (I have not checked this one out yet)

    I have tried to find out whether this has any effect on benchmarks, and so far there is no change. On top of that, it is not actually possible to disable the option: you can make the change, but once you reboot it is back to Enabled.

    "quote from the help file"
    Enabling Disk Data Cache
    Enabling the disk data cache for all disks on the array allows you to enable cache memory physically present on the disks and use it to speed up data access. This action is only available from the Array Properties pane because the data cache must be in the same state across all disks that are part of a single array.In the Array Properties pane, the disk data cache is reported as enabled or disabled for all disks in the array. In the Disk Properties pane, the disk data cache is reported as enabled or disabled for a specific disk that is part of that array. The option to change this setting is only available from the Array Properties pane.
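The help-file text above is about the drives' own write cache. As a cross-check, the cache state can also be read per disk from the command line with smartmontools; a sketch, with an illustrative device path, and with the caveat that this may not work through the RSTe layer given the SMART issues noted earlier:

```shell
# Read the drive's current write-cache state
smartctl -g wcache /dev/csmi0,0

# Try to toggle it; if the controller owns the setting, as the GUI suggests,
# the change may be ignored or reverted, much like the reboot behaviour in the GUI
smartctl -s wcache,off /dev/csmi0,0
smartctl -g wcache /dev/csmi0,0
```

If the second read still reports the cache as enabled, that would match the observation that the setting always reverts to Enabled.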




    If we click on the volume (inside the array) we get to the Volume properties (as shown below).
    The properties look to be exactly the same as on 10/11.



    All in all, there is not much new on the configuration side except for the GUI, and I personally prefer the new one.
    Last edited by Anvil; 12-03-2011 at 05:54 AM.
