
Thread: Bolt-on RAM/VRM sinks for 4850/4870?

  #25
    Xtreme Addict
    Join Date: Mar 2007
    Posts: 1,489
    Quote Originally Posted by dnottis View Post
    I had a similar setup. After 3 weeks the card died. A 120mm fan was sitting right over the card, and VRM and core temps were all in check too. Good luck with yours.

    I feel this is very important and needs to be shared. While dealing with the HD4870 I was able to pose some questions to an engineer at ATI. One of the responses that concerned me most was this one -

    “when a user replaces our stock cooler with something that is more capable (eg water cooling or higher capacity integrated fan), RV770 XT GPU will draw significantly more current and cause the regulator to hit its temperature limit at 125C.”

    Now basically what he is saying is that when not using the stock fan, the card will draw more current than it otherwise would. A lot of people have blamed the deaths of the HD4870s on me water cooling them; I guess it's possible, and imo this may have been the reason. The part about the VRM hitting 125°C was pre-8.8 Catalyst, where the VRM would shoot up to almost 125°C right away. I believe the important part to take away from this is that the HD4870 without the stock fan connected to the header can pull more current than it would with the 3-pin header connected to the PCB, possibly the reason my two cards died. Be careful.
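
    To put rough numbers on that claim: conduction loss in each VRM phase scales with the square of the current, so even a moderate jump in GPU current can push the regulator toward a 125°C limit. The sketch below is only a back-of-the-envelope estimate; the phase count, Rds(on), switching loss, thermal resistance, and ambient figures are illustrative assumptions, not HD4870 specs.

    # Back-of-the-envelope VRM thermal check -- every number here is an
    # illustrative assumption, not a measured HD4870 spec.
    def vrm_phase_temp(total_current_a, phases=3, rds_on_ohm=0.004,
                       switching_loss_w=1.0, theta_ja_c_per_w=20.0,
                       ambient_c=45.0):
        """Estimate per-phase MOSFET temperature from conduction + switching losses."""
        i_phase = total_current_a / phases
        p_conduction = i_phase ** 2 * rds_on_ohm        # I^2 * Rds(on)
        p_total = p_conduction + switching_loss_w       # assumed fixed switching loss
        return ambient_c + p_total * theta_ja_c_per_w   # T = Tamb + P * theta_JA

    for amps in (60, 80, 100):
        print(f"{amps:>3} A total -> ~{vrm_phase_temp(amps):.0f} C per phase (estimate)")

    With those placeholder figures, roughly 80 A of total draw already lands near 125°C per phase and 100 A is well past it; the I² term is what makes a "significantly more current" scenario so punishing for the regulator.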


    I'm not in any way trying to be rude here, but until I see some kind of white paper, what that engineer said sounds like complete and utter crap.

    I've worked at an electronics manufacturer for the last ten years. I've taken multiple classes in AC and DC electrical theory and solid-state circuit design. I haven't cared to finish a degree yet, but I'm not far off, trust me.

    There is no logical way that a card would "sense" that its fan had been removed and then, for some reason, suddenly start sucking more power. It just doesn't add up to me. There is no mechanism I know of by which cooling a component will suddenly cause it to draw more current, nor can I see any reason why they would design the card to self-destruct with aftermarket cooling, which is basically what you have quoted from the AMD engineer.

    What I am guessing is that this guy does not quite know what he is talking about. Perhaps he means that when people put aftermarket cooling on and overclock the card, it can still overheat, but even that doesn't make sense from what I've seen.


    Remember, I've had two (now three) cards in testing for going on two months now, running perfectly cool and without the slightest sign of stress or strain. The reported current draw in GPU-Z never increased upon removal of the stock cooler on any of the cards.
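
    For anyone who wants to repeat that GPU-Z comparison, here is a minimal sketch; it assumes GPU-Z's "Log to file" output is comma-separated with a header row, and the log file names and the "VDDC Current" column label are placeholders to adjust for your own card.

    # Compare average reported VRM current between two GPU-Z sensor logs,
    # e.g. one captured with the stock cooler and one after a cooler swap.
    import csv

    def average_current(log_path, column_hint="VDDC Current"):
        """Average the first column whose header contains column_hint."""
        with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
            reader = csv.reader(f)
            header = [h.strip() for h in next(reader)]
            col = next(i for i, name in enumerate(header) if column_hint in name)
            values = []
            for row in reader:
                try:
                    values.append(float(row[col].strip()))
                except (ValueError, IndexError):
                    continue  # skip blank or malformed log lines
        return sum(values) / len(values) if values else float("nan")

    # Hypothetical file names for the two logging runs.
    stock = average_current("gpuz_stock_cooler.txt")
    swap = average_current("gpuz_aftermarket_cooler.txt")
    print(f"Stock: {stock:.1f} A   Aftermarket: {swap:.1f} A   Delta: {swap - stock:+.1f} A")

    Logging the same load for the same length of time in both runs keeps the comparison meaningful.
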
    Last edited by iandh; 09-11-2008 at 06:59 AM.
    Asus G73 - i7-740QM, Mobility 5870, 6GB DDR3-1333, OCZ Vertex II 90GB
