
Thread: PCI-E Overclocking for Raid Controller Performance

  1. #1
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029

    PCI-E Overclocking for Raid Controller Performance

    Wow – I had seen several folks claim that overclocking the PCIe speed would improve RAID card performance – impressive!
    Granted, this is only one test case - only one test setup - but I haven't seen AS SSD scores like this since I soft-RAIDed two 1231MLs together.
    I think I might have another go at that PCMark Vantage record.

    Ok – so the test setup – GGBT UD7 mobo, W3520 CPU, one Areca 1231ML-4G with 12xRAID0 Acard 9010 – PCIe voltage was upped one click to 1.54v.

    CAUTION - the Areca 1231ML-4G didn't like PCIe 120; performance was fine while running, but the card would not restart after running at PCIe 120.
    To fix it, I had to remove the card, remove the controller memory, apply power, remove the card, reinsert the controller memory, reinsert the card, and reboot to get the controller to boot - this happened twice.

    Impressive what a PCIe OC can do!



    The copy benchmarks were not as high as expected but the rest of it looks pretty good.

    Last edited by SteveRo; 02-18-2010 at 03:13 PM.

  2. #2
    Xtreme Addict
    Join Date
    Feb 2007
    Posts
    1,674
    I wonder if it helps the ICH10R... That is one beast of a setup.

  3. #3
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    yes, it also helps the ICH10R. wow steve-o, i am amazed you haven't been doing pci-e all along. it's simple guys: bump the volts and the pcie speed and see what you get. same with the ich10r - make sure that it is properly cooled however, bump yer volts and watch her fly! there is a sweet spot with pcie clocks however, and you can go too high, introducing serious stability issues; there is also a point of diminishing returns, which you can see in your graph above.
    Steve-o: another trick - mess with your PCI-E packet sizes, you can get some great returns from that. just be careful if running sli or tri sli, cause if you set the packets too high then the vid cards end up waiting, introducing stuttering into your video. most raid cards are designed to work at the maximum packet size that your motherboard allows. in essence, when you are overclocking pcie you are allowing packets to travel faster - now imagine if you make the packets larger also! it can affect latency also, so you need to find your happy medium, but it is usually never the default packet size of 128. i get big results at 1024.
    Note: some raid cards have built-in protection and will not allow you to boot with really high pci-e clocks; they go into a self-protection mode.
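    For anyone wondering how much headroom the clock bump actually buys, here is a minimal back-of-the-envelope sketch - my own assumption, not from the thread: the card sits in a PCIe Gen1 x8 link and throughput scales linearly with the reference clock.

```python
# Theoretical one-way PCIe Gen1 bandwidth vs. reference clock.
# Assumption: bandwidth scales linearly with the PCIe clock (100 MHz stock).

LANE_MBPS_GEN1 = 250  # 2.5 GT/s per lane with 8b/10b encoding = 250 MB/s

def link_bandwidth(lanes, pcie_clock_mhz):
    """Theoretical one-way bandwidth in MB/s for a Gen1 link."""
    return lanes * LANE_MBPS_GEN1 * (pcie_clock_mhz / 100.0)

for clk in (100, 110, 120):
    print(f"{clk} MHz: {link_bandwidth(8, clk):.0f} MB/s")
```

    Going from 100 to 120 MHz raises the x8 Gen1 ceiling from roughly 2000 to roughly 2400 MB/s - which fits why a bandwidth-limited card benefits, and why stability rather than the math is the limiting factor.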
    Last edited by Computurd; 02-15-2010 at 07:31 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  4. #4
    Banned
    Join Date
    May 2009
    Posts
    676
    sorry, steve, I just can't see the advantage of buying 12x 9010s given the cost & performance,
    they showed 16x X25-Es at Tom's doing 3430MB/s R&W at 4MB blocks on 2x LSI 9210-8i soft-RAIDed HBAs.

    each doing ~214.

    kudos for trying that PCI-e OC though,
    these are very impressive results.

    & nice post Computurd .
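    As a quick sanity check on the numbers quoted above, 16 drives at roughly 214 MB/s each does land right around the quoted 3430 MB/s aggregate:

```python
# Quick arithmetic check on the quoted Tom's numbers:
# 16 X25-E drives at ~214 MB/s each vs the reported ~3430 MB/s aggregate.
drives = 16
per_drive_mbps = 214  # approximate per-drive throughput from the quote
total = drives * per_drive_mbps
print(total)  # 3424 MB/s - within rounding of the quoted 3430
```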

  5. #5
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by Computurd View Post
    yes, it also helps the ICH10R. wow steve-o, i am amazed you haven't been doing pci-e all along. it's simple guys: bump the volts and the pcie speed and see what you get. same with the ich10r - make sure that it is properly cooled however, bump yer volts and watch her fly! there is a sweet spot with pcie clocks however, and you can go too high, introducing serious stability issues; there is also a point of diminishing returns, which you can see in your graph above.
    Steve-o: another trick - mess with your PCI-E packet sizes, you can get some great returns from that. just be careful if running sli or tri sli, cause if you set the packets too high then the vid cards end up waiting, introducing stuttering into your video. most raid cards are designed to work at the maximum packet size that your motherboard allows. in essence, when you are overclocking pcie you are allowing packets to travel faster - now imagine if you make the packets larger also! it can affect latency also, so you need to find your happy medium, but it is usually never the default packet size of 128. i get big results at 1024.
    Note: some raid cards have built-in protection and will not allow you to boot with really high pci-e clocks; they go into a self-protection mode.
    I just changed to the UD7 mobo; my previous board was the GGBT X58 Extreme, which would not boot above 102-103 PCIe.
    I will look for the packet size setting - I haven't been able to find it in the UD7 BIOS yet.
    Anyone else with a UD7? - where is the PCIe packet size adjustment?
    Last edited by SteveRo; 02-16-2010 at 02:23 AM.

  6. #6
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by onex View Post
    sorry, steve, I just can't see the advantage of buying 12x 9010s given the cost & performance,
    they showed 16x X25-Es at Tom's doing 3430MB/s R&W at 4MB blocks on 2x LSI 9210-8i soft-RAIDed HBAs.

    each doing ~214.

    kudos for trying that PCI-e OC though,
    these are very impressive results.

    & nice post Computurd .
    Good morning Onex,

    I am actually using six acard 9010 "boxes", each set to raid 0 - so 12xRaid0.

    Yes, the acard 9010s are very slow at sequential ops by today's SSD standards.
    However, you can use a raid card to compensate for low sequential speeds.
    Where the acards excel (relative to flash SSD) is in small file random ops, particularly small random writes.
    While RAID controllers can improve sequential reads/writes, they can generally do little to significantly improve small-file random IOPS.
    Generally - if you have drives with bad random performance, a good RAID card can improve that performance a little, but not much.
    This random IOPS performance, combined with the total lack of write degradation, is where the acards shine when compared to flash-based SSDs.
    I agree with you totally on the cost - especially since DDR2 has gone up in price so much in the past year.
    Acards are not the way to go for a "best bang for the buck" solution - my preference right now for that category is Anvil's 4x X25-V on ICH10R.
    Last edited by SteveRo; 02-16-2010 at 02:52 AM.

  7. #7
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    CAUTION - the Areca 1231ML-4G didn't like PCIe 120; performance was fine while running, but the card would not restart after running at PCIe 120.
    To fix it, I had to remove the card, remove the controller memory, apply power, remove the card, reinsert the controller memory, reinsert the card, and reboot to get the controller to boot - this happened twice.
    Last edited by SteveRo; 02-16-2010 at 02:43 AM.

  8. #8
    Banned
    Join Date
    May 2009
    Posts
    676
    The same thing happened here when adding some MHz to the PCI-e connection:
    a 5MHz bump made the HDD click, the DVD drive malfunction, and the graphics card pixelate the screen.
    I had to remove all of the components, reset the CMOS and the RTC, and disconnect the power until the MB was power-free.
    I left it like that for a few days thinking it was all lost,
    then reconnected it and it ran like nothing had happened.
    These PCI-e tunings are HW dependent - it seems like every part handles them differently,
    and they can cause serious system bugs and instabilities..
    Last edited by onex; 02-16-2010 at 04:12 AM.

  9. #9
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Onex,

    See - http://www.xtremesystems.org/forums/...d.php?t=215670 for a good compare of acard 9010 to the intel x25-e.

  10. #10
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    I am going to buy one two-port acard to play with. 16GB will be plenty for a boot array, and I'll run the 9211 with 4x X25-E in dynamic RAID 0 .

    Steve - I figure RAID 0 doesn't help game loading times with ACARDs? Does it get any better from 2 to 4 to 6 to 8?

  11. #11
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by One_Hertz View Post
    I am going to buy one two-port acard to play with. 16GB will be plenty for a boot array, and I'll run the 9211 with 4x X25-E in dynamic RAID 0 .

    Steve - I figure RAID 0 doesn't help game loading times with ACARDs? Does it get any better from 2 to 4 to 6 to 8?
    When it comes to game loading times, I doubt 2 vs 4 vs ... acard drives makes much difference.
    1 to 2 might make some difference - probably not a lot though.
    16GB DDR2 memory may set you back a bit.
    Let me know where you end up buying - I might pick up some more myself.

  12. #12
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Dynamic raid - should be very interesting!
    How about something like this -
    three arrays on the ich10R and 9211 like this -
    array 1 - 2xR0 acard on ich10R
    array 2 - 2xR0 x25-e on 9211
    array 3 - 2xR0 x25-e on 9211
    soft raid arrays 1, 2 and 3 together!

    Another option to try - switch arrays 1 and 2 - array 1 on 9211 and array 2 on ich10
    soft raid - all together!

  13. #13
    Xtreme Member
    Join Date
    Feb 2007
    Location
    South Texas
    Posts
    359
    Interesting results. My fileserver is set up with a Perc 5/i in the x16 slot of a mATX board. I can't recall if it allows for PCIe freq changing. I know that my EVGA 7150 does, however, and it might be worth swapping out motherboards.
    ASRock X399 Fatal1ty
    1950x Threadripper
    32gb DDR4
    GTX 1070

  14. #14
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by SteveRo View Post
    Let me know where you end up buying - I might pick up some more myself.
    I got quoted $298 CAD + tax for cheapest 8x 2GB sticks. Seems reasonable to me. Then another 350usd for the acard itself. Will end up being 700ish. Pretty expensive for 16gb.

  15. #15
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by One_Hertz View Post
    I got quoted $298 CAD + tax for cheapest 8x 2GB sticks. Seems reasonable to me. Then another 350usd for the acard itself. Will end up being 700ish. Pretty expensive for 16gb.
    $298 for 16GB sounds pretty good compared to what I see down here.
    Is the exchange rate about even now?

  16. #16
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by SteveRo View Post
    $298 for 16GB sounds pretty good compared to what I see down here.
    Is the exchange rate about even now?
    USD is worth about 5% more than CAD. Newegg.com can do it for $300USD flat so I am getting just a little bit cheaper than that.

  17. #17
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    @onex...you have to find the right place for your volts vs pcie clock, sometimes too much voltage can be worse than not enough.

  18. #18
    Banned
    Join Date
    May 2009
    Posts
    676
    ^^ It's a basic, entry-level MB which doesn't support any voltage tweaking at all, so that's a no-go for it .

        See - http://www.xtremesystems.org/forums/...d.php?t=215670 for a good compare of acard 9010 to the intel x25-e.
    Thanks for digging it up - I'll have a look at it soon.
    Last edited by onex; 02-16-2010 at 06:03 PM.

  19. #19
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by SteveRo View Post
    I just changed to the UD7 mobo; my previous board was the GGBT X58 Extreme, which would not boot above 102-103 PCIe.
    I will look for the packet size setting - I haven't been able to find it in the UD7 BIOS yet.
    Anyone else with a UD7? - where is the PCIe packet size adjustment?
    Amazing results SteveRo,

    My X58-UD5 wouldn't boot at 105, so I've ordered the UD7.

    Regarding the packet size, it may be an EVGA feature.

  20. #20
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    oh it is a feature on many, many boards; some just don't support it though

  21. #21
    Banned
    Join Date
    May 2009
    Posts
    676
    some GB boards are supposedly limited on this and need to be hard-modded to support higher frequencies,
    seems they changed that with the UD7 though^^.

  22. #22
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    actually i was referring to the pcie packet size expansion.

    here is some info i dug up

    After a bit of googling, here's what I've come up with:
    The Max Payload Size and TLP are functions related to the PCI-Express bus.
    The PCI Express bus uses three "layers" of communication (similar to the layers in the OSI Model (http://en.wikipedia.org/wiki/Osi_model)) -- physical, data link, and transaction.
    The transaction layer is where all "real" communication takes place (the physical layer is the wires, and the data link layer controls/maintains the link between PCI Express devices).
    The smallest chunk of data on the transaction layer is known as a "Transaction Layer Packet" or TLP (similarly, the smallest chunk of data on the data link layer is known as a "Data Link Layer Packet" or DLLP).
    A TLP consists of a header (containing packet information), a payload (containing the actual data to be transmitted), and possibly an ECRC (containing error-checking info).
    The Max Payload Size sets the maximum size of the payload that any TLP can contain. Devices must be able to receive payloads that large, and must not send payloads any larger.


    Essentially, what I can gather is that the Max Payload Size sets just how much actual data can be transferred within a single packet along the PCI-Express bus. Increasing the size would increase the theoretical throughput of the link (as more data would be payload rather than header/CRC/etc.), though it does not respond as well to errors (if a single bit is incorrect, all 4096 bytes will be thrown out for retransmission). Personally I'd leave it at the highest value (should be 4096 from what I can tell) since I doubt many packets will be getting corrupted unless you're in a really electrically noisy environment.

    JigPu
    Last edited by Computurd; 02-17-2010 at 07:36 PM.

  23. #23
    Banned
    Join Date
    May 2009
    Posts
    676
    actually i was referring to the pcie packet size expansion.
    oki, it's good for both^^,

    here's some more info,
    The figure below shows PCIe protocol efficiency as a function of payload length, assuming a 16-byte header and the use of ECRC (4 bytes). The top line shows the efficiency of a single packet considered alone.


    http://www.plxtech.com/files/pdf/tec...yload_Size.pdf

    i think it should be tested with the controllers though.
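    Using the overhead figures from the PLX paper above (16-byte header plus 4-byte ECRC), a tiny script shows why the bigger payload sizes stop paying off. Note this ignores physical-layer framing and DLLP traffic, so the numbers are slightly optimistic:

```python
# PCIe protocol efficiency per TLP: payload / (payload + header + ECRC).
# Overhead figures taken from the PLX paper quoted above; framing ignored.

OVERHEAD_BYTES = 16 + 4  # 16-byte header + 4-byte ECRC

def tlp_efficiency(payload_bytes):
    """Fraction of transmitted bytes that are actual data for one TLP."""
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

for size in (128, 256, 512, 1024, 2048, 4096):
    print(f"{size:5d} bytes: {tlp_efficiency(size):.1%}")
```

    Efficiency climbs from about 86% at the default 128-byte payload to about 98% at 1024 bytes, then flattens out - the point of diminishing returns mentioned earlier in the thread.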

  24. #24
    Banned
    Join Date
    May 2009
    Posts
    676
    See - http://www.xtremesystems.org/forums/...d.php?t=215670 for a good compare of acard 9010 to the intel x25-e.
    o.k, viewed the thread. Final (probable) thoughts on this: other than benching, it isn't worth it.
    Performance seems very consistent and should produce more precise benching figures -
    that seems to be the only purpose where RAM holds an advantage over SSDs.

    It's a shame they didn't put an extra SATA port & controller on the PCB for dual mode; the modules are highly capable, yet saturated at the protocol max of ~300MB/s.
    They could've doubled it at probably minor extra cost.

    As it stands, I can't see any advantage beyond the earlier specified over a 16GB RAM disk with a UPS.
    It should be cheaper than setting up an ACARD 'rig', much faster, and could be disconnected as needed or erased/assembled in seconds, while using a nice RC/HBA or even the ICH10R and a few SSDs.
    For anyone crunching/folding DC projects, i.e. leaving the computer on 24/7, a RAM drive should be a very nice tool - no need to deal with DDR2 and all the bundle.
    Again, with a fast storage solution, it could be really sweet.

    The only thing I can't figure out here is whether you're using 6 ACARDs dual-ported & RAIDed, i.e. 12 singles, or an actual count of 12, i.e. 24 ports.
    After seeing Anvil's ultra-impressive results with the 4x X25-Vs on the onboard RAID, you should really try setting up the 1231ML soft-RAIDed with those.

    For the record,
    I think soft-RAIDing an HBA (not an RC..), i.e. the 9210-8i etc., with the ICH10R should give beautiful scaling on the lower 4k, 8k etc. IOPS and push the limits with the bigger loads up to 1.5GB/s, and it would be very interesting to watch some other companies' products, i.e. Areca's 1300-series HBAs and even Intel's SASMF8I, or maybe just Areca.

    It's a shame most products on the market are not even reviewed by customers or professional sites; even worse, reviews aren't trying to test the edges of the cards, and some even test them with 4-6 HDDs!
    Nothing like XS for this -
    it's wonderful that at least here people are willing to push it.

  25. #25
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    ^^ The Intel HBA will be a rebranded LSI. LSI makes all RAID products for Intel now.
