I don't see the point of this. Everything has a limit; push it enough and it crashes. I'm sure you could produce the same on an nVidia card if you push enough amps or whatever through it.
Meh.
It's interesting that nobody has asked what exactly the test is stressing. The reason games with "complex" shaders don't stress hardware as much is that various functional units are often idle, waiting on high-latency memory or texturing operations.
If Tetedeiench's test is very math-heavy, with high shader utilization for extended periods of time, that should throw up a red flag for any sort of GPGPU application. All of the excuses about games not stressing hardware are bollocks, as it's trivial to whip up an OpenCL or CS application that runs full tilt on the shader core.
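For what it's worth, here's a minimal sketch of the kind of OpenCL kernel I mean (the kernel name, constants, and iteration count are made up for illustration, not anything from OCCT). A long chain of dependent multiply-adds with no memory reads inside the loop never stalls on fetches, so the ALUs sit at effectively full occupancy for as long as you let it run:
[CODE]
// Hypothetical ALU-burner kernel; purely illustrative, not OCCT's shader.
// There is no memory traffic inside the loop, so nothing hides behind
// fetch latency; the shader core just grinds dependent multiply-adds.
__kernel void burn(__global float *out, const int iterations)
{
    int gid = get_global_id(0);
    float a = (float)gid * 0.001f + 1.0f;

    for (int i = 0; i < iterations; ++i) {
        a = mad(a, 1.000001f,  0.0001f);   // dependent MADs keep ALUs busy
        a = mad(a, 0.999999f, -0.0001f);
    }

    out[gid] = a;   // one write so the compiler can't dead-code the loop
}
[/CODE]
Launch it with a global work size several times the number of stream processors and a large iteration count, and utilization should sit at the ceiling for as long as it runs, which is exactly the load profile games almost never sustain.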
Tetedeiench, you wouldn't be willing to share your source code, would you?
Let's not mention NV.
It crashed all three of my 4890s... but hell, I hate the Prime/FurMark stuff anyway.
_________________
A CPU or GPU must pass ANY stress test at stock speeds, no matter what, as long as it is not the drivers, the system, or the test itself that is flawed!
...
Can you imagine AMD telling some researcher not to optimize his algorithm for maximum utilization because the hardware isn't meant for that? But we really can't say anything for sure unless we see the shader code, because it might be doing something very inefficient that has no performance or IQ benefit.
I think there is definitely a design flaw or something with the 4890.
When I bench 3DMark06, I can run the 4890 at 1000 MHz (with 300 CFM of air directly on top, mind you, and a 100 percent fan setting). When I run 3DMark05 I can run it at 975, and when I run 3DMark03 I can't even run it at 945.
I don't see why there is so much variability in the clocks of these cards. Stability is definitely an issue when there is such variation in runnable bench speeds. AMD needs to stop the marketing propaganda (e.g. the AnandTech overclocking extravaganza) that these cards are bulletproof for overclocking, when they might have potential issues down the road, even at stock speeds.
I think these types of tests are important because they show how well a card is built for the long term. I have used this analogy before: it's like the testing of elevators. We don't test an elevator using only its rated weight; it is tested way beyond that to ensure long-term strength. If something can run at 150% capacity, then running at its designed spec will be a cakewalk, and running at that speed for a couple of years is assured.
I am sure there is a reason ATI-exclusive companies like PowerColor have reduced their warranties from lifetime to one year; the engineers are the best at knowing this kind of thing. Diamond is another company that offers a one-year warranty. I think AMD cards are not built for long-term lifetime warranties. Sapphire has by far the best warranty among the AMD exclusives (two or three years), and they are famous for their bad RMA service.
Interesting. You did run the test in FullScreen mode, didn't you?
I wonder if Sapphire used the reference VRM design for its card... that would be an interesting thing to look into.
I'm going to sleep (it's more than time for me to do so); I'll keep looking at this thread tomorrow. Please don't start a flame war, I'm just trying to get this thing sorted out and to know what's going on...
Yeah, trinibwoy, and maybe I want to use OCCT as my screensaver. If it can't run that, then I return my product as faulty.
...
Before going to sleep, I have to answer that: sorry, but the answer is no.
I don't think there are people skilled enough to analyze the effect a particular shader instruction produces on a GPU's die, or how that relates to the crash we're encountering.
Don't you think?
I've always kept my code to myself for the following reasons:
- I like to know where my code is used, for what purpose, by whom
- I don't want to see branches popping everywhere
- I don't want comments on my way of coding
So sorry, the answer is no!
So a new power virus...
Asus P5QL-Pro
Kingston HyperX 4x2 GB, 5-5-5-15 @ 1120 MHz
Q6600 @ 3.33 GHz
2x Sapphire 4870 1 GB
Galaxie 850 PSU (this thing rocks)
Audigy 2 ZS Gamer
4x FreeZones to cool the proc; idle @ 10C, full load @ 29C (soon to be 5x in my new Twelve Hundred case)
Vista Ultimate 64
If I (as well as others) am not experiencing any problems, I really don't see the need for it at this time.
Yet the games and programs currently available don't exhibit this problem. So the question in your case becomes: do I use my video card for that particular program, or for the games I play? My answer is for the games I play. If a person is not experiencing any problems, I simply don't see the need for it.
Then OCCT doesn't apply to you.
GPUpi might be better suited.
It's for those who use Linpack, SuperPi 32MB x2, Prime95 32-bit/x64, wPrime, Folding@home, etc.
Since you don't use the OCCT GPU test, you don't have a problem, so why complain?
What this shows is that if developers coded to make 100% use of the shaders available on the 4870/4890, the cards would crash.
At least that's my opinion.
No, I've been asked a few times to use it; I am answering why it's not necessary. Anyone can use their stock (for example) CPU and GPU to run those applications and obtain a result that may or may not reflect their expectations in the other programs and games they use.
I don't have a complaint, but I have asked what purpose this serves if folks using any video card currently have no issues with the programs/games (SuperPi, FAH, etc.) they use.
Originally Posted by Greg83
What this shows is that if developers coded to make 100% use of the shaders available on the 4870/4890, the cards would crash. At least that's my opinion.

Based on what exactly? Has this been tested by a game developer already? If so, what game is that?
It can help explain why some are stuck with what can be considered an unsatisfactory overclock.
If you believe that any game out there currently makes 100% use of these GPUs, I will never be able to convince you of anything new, so arguing with you is pointless, much like how you feel this program is.
What I am saying is that, due to this newfound flaw, we may never see more performance unlocked through code optimization, or more efficient F@H GPU cores that make use of all these GPUs have to offer.
But I am going to end it here. This back and forth really hasn't led to anything constructive concerning its use. Enjoy!
A theory has no facts.
This has multiple facts now:
FurMark and OCCT.
Let's not write this off as a coincidence without proof.