
Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test !

  1. #601
    Xtreme Member
    Join Date
    Jan 2005
    Location
    No(r)way
    Posts
    452
    Quote Originally Posted by [XC] riptide View Post
    Because we all know that you could use a very similar piece of code for a nice bit of graphics in a lot of scenarios. That's why.
    Do we really? It has not happened yet in games as far as I know, and from the explanation AMD gave it will only happen in artificially created load scenarios (not even GPGPU stuff), hence I am perfectly happy with it.
    Quote Originally Posted by truehighroller
    I still think this is bull myself. I want it fixed. I will call their asses because I seriously think that I am having some sort of issue, at least with my one video card. I think my other one might be fine. I have noticed that my one card gets way hotter than the other one, and yes, I know the one will put heat right at the one above it. I don't know, I will do some further troubleshooting, but IMO with all the fans and heatsinks I have I shouldn't be having issues like this. Which heatsink is best to use all around for the VRMs on these things? I still have the original heatsinks, so if someone here can tell me how to use the plate to cool these things down better I would be grateful.
    I think perhaps you should have done a little more research before putting on a third-party cooling solution.
    Obsolescence be thy name

  2. #602
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
    Quote Originally Posted by Frodin View Post
    Do we really? It has not happened yet in games as far as I know, and from the explanation AMD gave it will only happen in artificially created load scenarios (not even GPGPU stuff), hence I am perfectly happy with it.

    I think perhaps you should have done a little more research before putting on a third-party cooling solution.
    I think that perhaps you should have done some more reading before calling everyone idiots. I have been having this random crash issue for a while now and believed it to be power related, so I bought an add-in booster and the heatsinks, and the issue still rears its ugly head every now and then.

    I think that my first 4870 1GB card is the culprit because it is getting too hot VRM-wise. I did research, and not too many people seemed to be complaining that much, and the people that were usually stopped posting after someone told them to put a sink here or there. Probably because it kept the issue at bay a little, like in my situation, to the point where they just said ah, forget it and left it at that.


    I, however, would like no crashes at all because of this overheating issue with the damn VRMs. I have been in the scene for a while now and do have a decent amount of knowledge when it comes to overclocking and tweaking and building PCs, so don't treat me like I don't.
    Last edited by truehighroller; 05-30-2009 at 07:49 AM.
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  3. #603
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Tetedeiench, are any 4850s affected? I mean, they are just 4870s without GDDR5... I've just updated to 3.1.0 from 3.0.1 today, and I'll run it at 4850 stock (625/1986), then 680/2100 (my ASUS TOP factory default), then 715/2150, which is my current overclock, to see what works and doesn't work.
    :)

  4. #604
    Xtreme Member
    Join Date
    Jan 2005
    Location
    No(r)way
    Posts
    452
    Quote Originally Posted by truehighroller View Post
    I think that perhaps you should have done some more reading before calling everyone idiots. I have been having this random crash issue for a while now and believed it to be power related, so I bought an add-in booster and the heatsinks, and the issue still rears its ugly head every now and then.

    I think that my first 4870 1GB card is the culprit because it is getting too hot VRM-wise. I did research, and not too many people seemed to be complaining that much, and the people that were usually stopped posting after someone told them to put a sink here or there. Probably because it kept the issue at bay a little, like in my situation, to the point where they just said ah, forget it and left it at that.


    I, however, would like no crashes at all because of this overheating issue with the damn VRMs. I have been in the scene for a while now and do have a decent amount of knowledge when it comes to overclocking and tweaking and building PCs, so don't treat me like I don't.
    I did not call you or anyone else an idiot, I merely suggested that you could have done some more research before changing the cooling solution. It never hurts to do a little more research. There was a huge thread about VRM cooling in the water cooling section just after the 4870 was released (around last summer?), and I remember reading a thread in the air cooling section as well about someone having trouble with overheating with the ramsinks that came with a Thermalright T-Rad. Largon suggested on the previous page that your VRM cooling was inadequate; no, actually, he said it was terribly inadequate:
    Quote Originally Posted by largon
    That tiny ramsink'ish thing you have on the VRMs is terribly inadequate for Volterra chips.
    Obsolescence be thy name

  5. #605
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by truehighroller View Post
    I think that perhaps you should have done some more reading before calling everyone idiots. I have been having this random crash issue for a while now and believed it to be power related, so I bought an add-in booster and the heatsinks, and the issue still rears its ugly head every now and then.

    I think that my first 4870 1GB card is the culprit because it is getting too hot VRM-wise. I did research, and not too many people seemed to be complaining that much, and the people that were usually stopped posting after someone told them to put a sink here or there. Probably because it kept the issue at bay a little, like in my situation, to the point where they just said ah, forget it and left it at that.


    I, however, would like no crashes at all because of this overheating issue with the damn VRMs. I have been in the scene for a while now and do have a decent amount of knowledge when it comes to overclocking and tweaking and building PCs, so don't treat me like I don't.
    I posted a thread once about VRM temps on my GTX280, which as I've mentioned uses the same VRMs, and Unwinder replied saying that the upper limit of these VRMs is around 150°C. Even on this stress test, which is just crazy, you are pretty far away from that. So I doubt temps are your problem. Your VRMs could be cooler, but they're not dangerously hot. What temps do you see while gaming? You may just have a bad card, it happens, but your crash is not related to this "issue".

    Frodin, do you have any links to those threads? They sound like a good read, thanks.
    Last edited by BababooeyHTJ; 05-30-2009 at 08:51 AM.

  6. #606
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
    Quote Originally Posted by BababooeyHTJ View Post
    I posted a thread once about VRM temps on my GTX280, which as I've mentioned uses the same VRMs, and Unwinder replied saying that the upper limit of these VRMs is around 150°C. Even on this stress test, which is just crazy, you are pretty far away from that. So I doubt temps are your problem. Your VRMs could be cooler, but they're not dangerously hot. What temps do you see while gaming? You may just have a bad card, it happens, but your crash is not related to this "issue".

    Frodin, do you have any links to those threads? They sound like a good read, thanks.

    I will have to start digging deeper then, I guess, and will pull logs and see exactly what the issue is; I very well might have a card going bad. I will play some games and keep logging enabled, that way when everything crashes I can look at what the temps were right before the crashes. I would like to see those posts as well. The one about air cooling might very well be the one I was looking around on already. Does anyone know how to use the original plates that come with the coolers for cooling these things?
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  7. #607
    Xtreme Member
    Join Date
    Aug 2007
    Location
    Aarhus, Denmark
    Posts
    314
    Quote Originally Posted by Shadov View Post
    Unless you use specially prepared code that has nothing to do with real applications or games. Why do you fail to acknowledge that every time?

    Yeah, we know "its your baby", but cmon!
    Because it's being done using standard API calls, meaning anybody could do it !! So IMO it's a design flaw when the card can't handle 100 % load at default clocks.
    AMD Ryzen 9 5900X
    ASRock Radeon RX 7900 XTX Phantom Gaming OC
    Asus ROG Strix B550-F Gaming Motherboard
    Corsair RM1000x SHIFT PSU
    32 GB DDR4 @3800 MHz CL16 (4 x 8 GB)

    1x WD Black SN850 1 TB
    1 x Samsung 960 250 GB
    2 x Samsung 860 1 TB
    1x Seagate 16 TB HDD

    Dell G3223Q 4K UHD Monitor
    Running Windows 11 Pro x64 Version 23H2 build 22631.2506

    Smartphone : Samsung Galaxy S22 Ultra
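
    As a rough, hypothetical illustration of the "standard API calls" point above: the sketch below is written in CUDA only because that is a compact way to express it. It is not OCCT's actual code (OCCT's test is a graphics shader), and the kernel name, launch size and iteration count are made up; the same dependent-ALU pattern could just as well be written with the Direct3D or OpenGL shaders a Radeon actually runs. A long chain of dependent fused multiply-adds with no memory traffic keeps every ALU busy, which is the kind of load that pushes VRM current toward its worst case.

    // Hypothetical ALU-saturation kernel: a long chain of dependent FMAs
    // with no memory accesses inside the loop, so the ALUs never idle.
    #include <cuda_runtime.h>

    __global__ void alu_burn(float *out, int iters)
    {
        float a = threadIdx.x * 0.001f + 1.0f;
        float b = blockIdx.x  * 0.002f + 1.0f;
        for (int i = 0; i < iters; ++i) {
            a = fmaf(a, b, 0.9999f);   // dependent fused multiply-add
            b = fmaf(b, a, 1.0001f);
        }
        // Store a result so the compiler cannot remove the loop.
        out[blockIdx.x * blockDim.x + threadIdx.x] = a + b;
    }

    int main()
    {
        const int blocks = 1024, threads = 256;   // illustrative launch size
        float *d_out = nullptr;
        cudaMalloc(&d_out, blocks * threads * sizeof(float));

        // Re-launch the kernel repeatedly to hold the load, as a stress test would.
        for (int pass = 0; pass < 1000; ++pass) {
            alu_burn<<<blocks, threads>>>(d_out, 1 << 20);
            cudaDeviceSynchronize();
        }

        cudaFree(d_out);
        return 0;
    }

    Nothing in it is exotic; it is the sort of loop any GPGPU beginner could write, which is exactly why some posters argue the card should tolerate it at stock clocks and volts.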

  8. #608
    Moderator
    Join Date
    Mar 2006
    Posts
    8,556
    Quote Originally Posted by Frodin View Post
    Do we really? It has not happened yet in games as far as I know, and from the explanation AMD gave it will only happen in artificially created load scenarios (not even GPGPU stuff), hence I am perfectly happy with it.
    Ya. But game engines change. They get more powerful. People code graphics in little apps all the time. New GPGPU stuff is rolling off all the time. It annoys me because it LIMITS the way a developer can optimize his code for this particular card, IF he had code similar to the one we see here. The card should never fault because of a cheap power design, ON STOCK CLOCKS AND VOLTS.

    What is encouraging for all of us in the future is AMD's acknowledgment and addressing of the issue, however unique it is.

  9. #609
    Muslim Overclocker
    Join Date
    May 2005
    Location
    Canada
    Posts
    2,786
    Quote Originally Posted by Tetedeiench View Post
    Well, I've got news for you: this has been confirmed by AMD themselves now, according to hardware.fr, a reference in France, as you can tell by the name. Here's the article:
    http://www.hardware.fr/news/10235/ra...-probleme.html

    They reproduced the problem, and here is AMD's official position, which is interesting (let me translate the interesting parts for you, besides the "we confirm the issue"; you can check it with Google Translate if you want) (oh, and TDP = thermal design power = http://en.wikipedia.org/wiki/Thermal_design_power )


    teom,
    You said it was when it reaches the 82A/83A line, which I said is not true because I did reach above 83 and it functioned normally. That's beside the point now.

    So according to ATI, if this is a protection measure on the reference design but does not exist on non-reference designs, then how the heck do we know that those components won't eventually blow up? I mean, do we know that the AIBs are not using reference components (besides the Asus super)?

    Seems a bit risky... anyone willing to run OCCT for a few days on their non-reference card to see what happens?

    On second thought, what ATI said makes sense. There is no practical scenario where you are stressing all the units, including the texturing and computing units, at the same time... that's like having a game scene that needs no I/O, doing ridiculously inefficient calculations in the background to use up all the shaders...

    No meaningful calculation can max the GPU at 100% in all components 100% of the time, unless it is an infinite loop with no purpose (see the sketch at the end of this post).
    Last edited by ahmad; 05-30-2009 at 10:12 AM.

    My watercooling experience

    Water
    Scythe Gentle Typhoons 120mm 1850RPM
    Thermochill PA120.3 Radiator
    Enzotech Sapphire Rev.A CPU Block
    Laing DDC 3.2
    XSPC Dual Pump Reservoir
    Primochill Pro LRT Red 1/2"
    Bitspower fittings + water temp sensor

    Rig
    E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB


    I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.
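
    For contrast with the sketch a few posts up, here is an equally hypothetical kernel shaped more like real work (the names and sizes are again invented): each thread has to fetch its inputs from global memory before it can do a handful of math operations, so the ALUs spend much of their time stalled waiting on loads, and average utilization, and therefore power draw, stays well below the pure-ALU worst case. That gap is what ahmad is describing.

    // Hypothetical "realistic" kernel: only a little math per byte moved,
    // so threads stall on global-memory latency and the ALUs sit idle
    // much of the time, unlike the synthetic worst case above.
    #include <cuda_runtime.h>

    __global__ void scale_add(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float x = a[i];             // long-latency load
            float y = b[i];             // another load
            out[i] = fmaf(x, 2.0f, y);  // one FMA per two loads and one store
        }
    }

    int main()
    {
        const int n = 1 << 24;          // illustrative problem size
        float *a, *b, *out;
        cudaMalloc(&a,   n * sizeof(float));
        cudaMalloc(&b,   n * sizeof(float));
        cudaMalloc(&out, n * sizeof(float));
        cudaMemset(a, 0, n * sizeof(float));
        cudaMemset(b, 0, n * sizeof(float));

        const int threads = 256;
        const int blocks  = (n + threads - 1) / threads;
        scale_add<<<blocks, threads>>>(a, b, out, n);
        cudaDeviceSynchronize();

        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

    In rough terms, the first sketch is what a power virus looks like and this one is what a typical frame or GPGPU pass looks like; only the former drives sustained current toward the protection threshold being discussed.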



  10. #610
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    247
    Quote Originally Posted by [XC] riptide View Post
    Ya. But game engines change. They get more powerful. People code graphics in little apps all the time. New GPGPU stuff is rolling off all the time. It annoys me because it LIMITS the way a developer can optimize his code for this particular card, IF he had code similar to the one we see here. The card should never fault because of a cheap power design, ON STOCK CLOCKS AND VOLTS.

    What is encouraging for all of us in the future is AMD's acknowledgment and addressing of the issue, however unique it is.
    I agree that it's nice that ATI is going to address the issue.

    BUT IMHO it's not an issue. ATI/AMD has said it themselves: the only way this could ever happen is if you deliberately go out of your way. This has nothing to do with "optimizations" that game makers may try to use. Standard API calls or not, it's a power virus, pure and simple.

    Using that same argument I can say every hard drive ever made has a design flaw because you can format it (using "standard", built-in tools) and lose all data. The only way a HDD will become formatted is if I go out of my way and, well, run a format on it.

    Quote Originally Posted by ahmad View Post
    On second thought, what ATI said makes sense. There is no practical scenario where you are stressing all the units, including the texturing and computing units, at the same time... that's like having a game scene that needs no I/O, doing ridiculously inefficient calculations in the background to use up all the shaders...

    No meaningful calculation can max the GPU at 100% in all components 100% of the time, unless it is an infinite loop with no purpose.
    This.

    The same is true of CPUs: just because Task Manager and Prime95, Linpack, OCCT, etc. say you're using 100% of the CPU doesn't mean you're ACTUALLY using 100% of all CPU die components 100% of the time, simply that you're using up 100% of the CPU's time.
    Last edited by Vienna; 05-30-2009 at 10:19 AM.

    Core 2 Duo (Conroe) was based on the Intel Core Duo (Yonah), which was based on the Pentium M (Banias), which was based on the Pentium III (Coppermine).

    Core 2 Duo is a Pentium III on meth.

  11. #611
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    I'm happy that this is how it turned out. I was kind of expecting AMD to ignore the problem, but perhaps the same mindset that is working for their processor division (interfacing with the enthusiast community) has spread to other divisions. Even though overclockers are a small market segment, and the problem is an extreme corner case, they still listened and are redesigning the chip and board. I would suggest that, far from being the blunder many have claimed, this could actually bode well for their reputation.

  12. #612
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Texas
    Posts
    1,663
    Quote Originally Posted by Solus Corvus View Post
    I'm happy that this is how it turned out. I was kind of expecting AMD to ignore the problem, but perhaps the same mindset that is working for their processor division (interfacing with the enthusiast community) has spread to other divisions. Even though overclockers are a small market segment, and the problem is an extreme corner case, they still listened and are redesigning the chip and board. I would suggest that, far from being the blunder many have claimed, this could actually bode well for their reputation.
    +1 to that. Let's see what the R8xx series will do.
    Core i7 2600K@4.6Ghz| 16GB G.Skill@2133Mhz 9-11-10-28-38 1.65v| ASUS P8Z77-V PRO | Corsair 750i PSU | ASUS GTX 980 OC | Xonar DSX | Samsung 840 Pro 128GB |A bunch of HDDs and terabytes | Oculus Rift w/ touch | ASUS 24" 144Hz G-sync monitor

    Quote Originally Posted by phelan1777 View Post
    Hail fellow warrior albeit a surat Mercenary. I Hail to you from the Clans, Ghost Bear that is (Yes freebirth we still do and shall always view mercenaries with great disdain!) I have long been an honorable warrior of the mighty Warden Clan Ghost Bear the honorable Bekker surname. I salute your tenacity to show your freebirth sibkin their ignorance!

  13. #613
    Registered User
    Join Date
    Jan 2009
    Posts
    15
    The cards can never reach 83A in games???
    HA, I say HAAAA.
    My Gainward reference 4870 already goes to 72A in Crysis at 1600x1200, no AA, no AF, Catalyst 9.5 driver defaults, after about 5 minutes,
    and it crashes after 30 minutes to 1 hour... seems to be very random; maybe it spikes to 83A, causing it to crash.

    Don't use the RivaTuner OSD in-game because you will get false data; let RivaTuner monitor your card in the background.
    Now imagine I used a higher resolution like 2500... it would crash, wouldn't it?
    I say HAAA again.

  14. #614
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Since it takes so long to happen, it doesn't really sound like the same issue. RMA your card.

  15. #615
    Registered User
    Join Date
    Jan 2009
    Posts
    15
    I just ran the Crysis GPU benchmark and this is what I got.

    The 72 amps that I got were during actual gameplay on the carrier level, fighting the huge alien ships.
    Why do my clocks go from 2D to 3D mode every damn second? I don't see that happening on your card.
    Damn you ATI, or Gainward, or whoever's fault it is XD
    I'm so tired of this problem, I just want to play my games without crashing QQ
    Last edited by bluedevil; 05-30-2009 at 01:47 PM.

  16. #616
    Registered User
    Join Date
    Jan 2009
    Posts
    15
    All my Crysis runs were made without AA or AF, using Catalyst 9.5 default settings, and yes, I notice a higher amperage without the AA in your screenshot.
    I also notice that my clocks go from 2D to 3D mode every damn second; I don't see that happening on your card!!
    Can you tell me if your card is a reference 4870 512MB GDDR5 and who produced it? Mine is Gainward-made.
    Could you please save your video BIOS file using GPU-Z 0.3.3 (latest version) so I can try it on my card? Maybe it will make it stop switching from 2D to 3D mode... maybe that is the problem.
    All reference cards' BIOSes should be compatible with one another.
    Here are some pics of my card:
    http://www.netraid.com.my/images/Gainward-HD4870.jpg
    http://www.ixbt.com/video3/images/ga...4870-front.jpg
    http://www.ixbt.com/video3/images/ga...-scan-back.jpg

  17. #617
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by bluedevil View Post
    All my Crysis runs were made without AA or AF, using Catalyst 9.5 default settings, and yes, I notice a higher amperage without the AA in your screenshot.
    I also notice that my clocks go from 2D to 3D mode every damn second; I don't see that happening on your card!!
    Can you tell me if your card is a reference 4870 512MB GDDR5 and who produced it? Mine is Gainward-made.
    Could you please save your video BIOS file using GPU-Z 0.3.3 (latest version) so I can try it on my card? Maybe it will make it stop switching from 2D to 3D mode... maybe that is the problem.
    All reference cards' BIOSes should be compatible with one another.
    Here are some pics of my card:
    http://www.netraid.com.my/images/Gainward-HD4870.jpg
    http://www.ixbt.com/video3/images/ga...4870-front.jpg
    http://www.ixbt.com/video3/images/ga...-scan-back.jpg
    Is this the HSF combo you are using with your PC? I was hoping to see an actual pic of your video card. In any case, I've noticed that when dust builds up at/near the VRM area of the video card it not only creates more heat but causes the card to use more power. On a hunch I decided to clean that area out: I removed the front plate, took a can of air and sprayed in and around the VRM area. Doing so created a plume of dust. After the air cleaning I noticed that my temps were lower and power consumption had gone down to what you see today. If you are also experiencing the same problem this could help, so I would suggest doing that. I would never recommend using other BIOSes for situations like this, so it really serves no purpose at this time. Which leads me to ask: have you used or altered the BIOS in the past?

    For me, I've never seen the power draw you've shown when playing Crysis. So it's either some sort of heat build-up or a defective card, as others have suggested to you. If the former does not work, RMA the card. If your card is 2D/3D switching, then remove all web browsers from your desktop and see if it stops. Better yet, just create a 2D profile, add the icon to the desktop, and click on it to see if that stops the 2D/3D switching.
    Last edited by Eastcoasthandle; 05-30-2009 at 02:20 PM.

  18. #618
    Registered User
    Join Date
    May 2005
    Posts
    31
    I've got a PowerColor 4890; I think it has a VRM shutdown temp of 95-110°C. I notice some people are hitting over 125°C on the VRMs. Has anyone with a 1000MHz/1200MHz BIOS tried updating the BIOS to see if the VRM temp limit is gone?


  19. #619
    Registered User
    Join Date
    Jan 2009
    Posts
    15
    Eastcoasthandle,
    Yes, that is what I'm using, and sorry, but I don't have a camera to take any photos.
    I don't want to remove the heatsink because I have no paste to reapply, and from what I have read the heatsink comes off all in one piece, exposing the core and memory, and then you have to remove the plastic to expose the actual part where the dirt gets in.
    Also, I don't want my screwdriver to leave any marks on the screws, because the shop could say that I messed with the card.
    I closed all the applications I could and that didn't stop the 2D/3D switching.
    Please, dude, give me your BIOS. I know how to use RBE to look in it so I can spot the differences, and ATIWinflash to flash it if I decide to, and if something goes wrong I can make an autoflash disk using atiflash360 to unbrick it from DOS, or use a second card to flash the first one.
    Last edited by bluedevil; 05-30-2009 at 02:49 PM.

  20. #620
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by bluedevil View Post
    The cards can never reach 83A in games???
    HA, I say HAAAA.
    My Gainward reference 4870 already goes to 72A in Crysis at 1600x1200, no AA, no AF, Catalyst 9.5 driver defaults, after about 5 minutes,
    and it crashes after 30 minutes to 1 hour... seems to be very random; maybe it spikes to 83A, causing it to crash.

    Don't use the RivaTuner OSD in-game because you will get false data; let RivaTuner monitor your card in the background.
    Now imagine I used a higher resolution like 2500... it would crash, wouldn't it?
    I say HAAA again.
    Your other post:

    Something is not right; I've only seen 50A on mine (using AA).





    Quote Originally Posted by bluedevil View Post
    I just ran the Crysis GPU benchmark and this is what I got.

    The 72 amps that I got were during actual gameplay on the carrier level, fighting the huge alien ships.
    Damn you ATI, or Gainward, or whoever's fault it is XD
    I'm so tired of this problem, I just want to play my games without crashing QQ

    This is without AA. See the trend? Your frame rates are a tad lower, your VRM temps are higher, and power consumption is greater.

    Did you provide a pic of your video card? I'd like to see what it looks like.




    Quote Originally Posted by bluedevil View Post
    Eastcoasthandle,
    Yes, that is what I'm using, and sorry, but I don't have a camera to take any photos.
    I don't want to remove the heatsink because I have no paste to reapply, and from what I have read the heatsink comes off all in one piece, exposing the core and memory, and then you have to remove the plastic to expose the actual part where the dirt gets in.
    Also, I don't want my screwdriver to leave any marks on the screws, because the shop could say that I messed with the card.
    I closed all the applications I could and that didn't stop the 2D/3D switching.
    Please, dude, give me your BIOS. I know how to use RBE to look in it so I can spot the differences, and ATIWinflash to flash it if I decide to, and if something goes wrong I can make an autoflash disk using atiflash360 to unbrick it from DOS, or use a second card to flash the first one.
    I can assure you it's not the BIOS. I just ran the GPU test again and got this:


    I thought about it and wanted to know if I was averaging 55A. So far it's not. I honestly think you should get yourself some TIM, open up the HSF, and clean out any dust that may have accumulated in/near the VRM area.
    Last edited by Eastcoasthandle; 05-30-2009 at 03:26 PM.

  21. #621
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
    Quote Originally Posted by bluedevil View Post
    Eastcoasthandle
    yes ......
    I think he has flashed it before, yes, because I had that same issue when trying to mod my BIOS back in the day, when I first got the card I think is causing issues, and it did exactly this, the whole switching from 2D to 3D crap. I ended up finding a BIOS for a reference card that was clocked higher, flashed them both to it, and the issue went away.
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  22. #622
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by truehighroller View Post
    I think he has flashed it before, yes, because I had that same issue when trying to mod my BIOS back in the day, when I first got the card I think is causing issues, and it did exactly this, the whole switching from 2D to 3D crap. I ended up finding a BIOS for a reference card that was clocked higher, flashed them both to it, and the issue went away.
    That's what I was thinking as well, and I do recall that specific issue when flashing. Perhaps you can help him by providing a BIOS that could work with his video card? Maybe PM him and tell him exactly how you flashed the BIOS, etc.? Because you and I don't have the 2D/3D switching problem. It's a thought...

    Side note:
    I already told him to create a 2D profile using CCC and create an icon for the desktop. Once he clicks on it, that should stop the 2D/3D switching (for now).
    Last edited by Eastcoasthandle; 05-30-2009 at 03:22 PM.

  23. #623
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by [XC] riptide View Post
    Ya. But game engines change. They get more powerful. People code graphics in little apps all the time. New GPGPU stuff is rolling off all the time. It annoys me because it LIMITS the way a developer can optimize his code for this particular card, IF he had code similar to the one we see here. The card should never fault because of a cheap power design, ON STOCK CLOCKS AND VOLTS.

    What is encouraging for all of us in the future is AMD's acknowledgment and addressing of the issue, however unique it is.
    I mean this in a neutral way, but before you say 'it could happen', you should develop a much deeper understanding of the GPU, the code that runs on it, and the parallels it draws to OCCT's code.

    I don't know much myself either, which is why I didn't chime in with my opinion on whether this could ever happen in real life.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  24. #624
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Hey look at that...
    The experts basically said what I have been saying all along.

    GJ Tet.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  25. #625
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by LordEC911 View Post
    Hey look at that...
    The experts basically said what I have been saying all along.

    GJ Tet.
    How do you figure?
