The output configuration should have been miniDP + 2x dual-link DVI on the 5870 and 5850 as well.
As far as I know, OCCT and FurMark use 100% of the processing power of the 4000 series... which is significantly higher than 100% of the processing power of the GT200 series. The issue is that the 4000-series hardware doesn't naturally make use of all that power, and neither does the driver support... the GT200 series does, however.
In theory, 4890 vs. GTX 285, the 4890 would win every time, but that's just not how it is.
http://anandtech.com/video/showdoc.aspx?i=3643&p=11
So apparently NVIDIA also does this. Quote:
That problem reared its head a lot for the RV770 in particular, with the rise in popularity of stress testing programs like FurMark and OCCT. Although stress testers on the CPU side are nothing new, FurMark and OCCT heralded a new generation of GPU stress testers that were extremely effective in generating a maximum load. Unfortunately for RV770, the maximum possible load and the TDP are pretty far apart, which becomes a problem since the VRMs used in a card only need to be spec’d to meet the TDP of a card plus some safety room. They don’t need to be able to meet whatever the true maximum load of a card can be, as it should never happen.
Why is this? AMD believes that the instruction streams generated by OCCT and FurMark are entirely unrealistic. They try to hit everything at once, and this is something that they don’t believe a game or even a GPGPU application would ever do. For this reason these programs are held in low regard by AMD, and in our discussions with them they referred to them as “power viruses”, a term that’s normally associated with malware. We don’t agree with the terminology, but in our testing we can’t disagree with AMD about the realism of their load – we can’t find anything that generates the same kind of loads as OCCT and FurMark.
Regardless of what AMD wants to call these stress testers, there was a real problem when they were run on RV770. The overcurrent situation they created was too much for the VRMs on many cards, and as a failsafe these cards would shut down to protect the VRMs. At a user level shutting down like this isn’t a very helpful failsafe mode. At a hardware level shutting down like this isn’t enough to protect the VRMs in all situations. Ultimately these programs were capable of permanently damaging RV770 cards, and AMD needed to do something about it. For RV770 they could use the drivers to throttle these programs; until Catalyst 9.8 they detected the program by name, and since 9.8 they detect the ratio of texture to ALU instructions (Ed: We’re told NVIDIA throttles similarly, but we don’t have a good control for testing this statement). This keeps RV770 safe, but it wasn’t good enough. It’s a hardware problem, the solution needs to be in hardware, particularly if anyone really did write a power virus in the future that the drivers couldn’t stop, in an attempt to break cards on a wide scale.
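For the curious, here's a rough sketch of what a ratio-based throttle like the one the article describes might look like. Everything in it (the counter struct, the threshold, the clock values, set_core_clock) is invented for illustration; this is not AMD's actual driver code, just the general idea of flagging a load whose texture:ALU instruction mix looks nothing like a game:

```c
/* Hypothetical sketch of a texture:ALU ratio throttle.
 * All names, thresholds, and interfaces are invented for
 * illustration; this is NOT AMD's actual driver logic. */
#include <stdio.h>
#include <stdbool.h>

/* Assumed per-interval instruction counters sampled from the GPU. */
typedef struct {
    unsigned long tex_instructions;
    unsigned long alu_instructions;
} gpu_counters_t;

/* Real games are heavily ALU-bound; a load that saturates the
 * texture units and ALUs at once shows an unusually high ratio. */
#define SUSPICIOUS_TEX_ALU_RATIO 0.5
#define THROTTLED_CLOCK_MHZ      500
#define NORMAL_CLOCK_MHZ         850

static void set_core_clock(unsigned mhz)
{
    /* Stub: a real driver would program a power state here. */
    printf("core clock -> %u MHz\n", mhz);
}

static bool looks_like_power_virus(const gpu_counters_t *c)
{
    if (c->alu_instructions == 0)
        return false;
    double ratio = (double)c->tex_instructions / (double)c->alu_instructions;
    return ratio > SUSPICIOUS_TEX_ALU_RATIO;
}

int main(void)
{
    /* Made-up counter samples: a game-like mix, then a
     * FurMark-like mix that hammers everything at once. */
    gpu_counters_t game    = { .tex_instructions = 100, .alu_instructions = 900 };
    gpu_counters_t furmark = { .tex_instructions = 600, .alu_instructions = 700 };

    set_core_clock(looks_like_power_virus(&game)    ? THROTTLED_CLOCK_MHZ : NORMAL_CLOCK_MHZ);
    set_core_clock(looks_like_power_virus(&furmark) ? THROTTLED_CLOCK_MHZ : NORMAL_CLOCK_MHZ);
    return 0;
}
```

It also makes the article's point obvious: a driver-side check like this can always be renamed around or patched out, which is why they say the real fix has to be in hardware.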
That's what I was trying to say: it's a hardware problem. No matter how unrealistic the software is... it's inexcusable for software to permanently damage hardware.
I do agree in theory, but I don't think I would have told ATI to release a slower card as a result. If it is true that this will not be a problem with any games (or programs that use the GPU for useful work), then they're OK. Kind of like how most networks are heavily oversubscribed because a situation where they will see full use from all nodes just does not happen.
OTOH, the article that cegras brought up made a good point: how terrible would it be if someone wrote a virus that modified your video drivers to avoid the shutdown, bypassed that ratio check, and then stressed your hardware until it broke? Aside from a boot-sector virus on a hard drive, this is probably the only other realistic software threat to hardware (and expensive hardware at that), and that is not a position I would like to be in if I were ATI. An nVidia fanboy with some programming skill could literally cost the company millions in RMAs (possibly tens of millions; I'm not sure on sales numbers).
True, but overclocking is by definition running the hardware beyond specification. BIOS flashing is a bit different, but there the damage comes either from bad software that pushes the hardware out of spec, or from corruption that leaves the hardware unable to run at all.
What orangekiwii is talking about is hardware running at spec speeds still being damaged by software, and no, that should never happen.
I'm with orangekiwii in that no software should be able to break the hardware. If it can, it's a problem of the hardware not being solid enough.
The thing about OCing and BIOS flashing is a more obscure point (even if it's a point with actual foundation), because both are software whose goal is to change the default configuration, and thus the behaviour, of the hardware, so it's more a question of "is it right or wrong to allow the user the flexibility to establish a behaviour different from the default?". But FurMark, OCCT, and every other piece of software you want to use/program that simply operates the unmodified hardware shouldn't be able to break it.
That said, I also think those are programs that force a situation that isn't even similar to real-world usage conditions, so punishing real-world usage because of them is not right, in my opinion.
Probably something like a protective check on the current being drawn, which underclocks/undervolts the core (instead of shutting the whole thing down) when dangerous levels are reached, would be the ideal solution. If those levels can't be hit in real-world scenarios, you don't mind the performance loss when the throttle kicks in, and it would be effective protection against situations that could physically damage the hardware, without causing shutdowns or other malfunctions.
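In spirit, something like the control loop below. It's a hypothetical sketch only: the current readings, clock/voltage values, and thresholds are all made up, and real VRM protection would live in hardware/firmware rather than host-side C. The point is stepping down instead of shutting off:

```c
/* Hypothetical firmware-style current-limit loop: back the clocks
 * and voltage off when VRM current crosses a limit, restore them
 * once it drops. All values are invented for illustration. */
#include <stdio.h>

#define CURRENT_LIMIT_A   80.0  /* assumed VRM spec plus safety margin */
#define CURRENT_RECOVER_A 70.0  /* must drop below this to leave throttle */

static int throttled = 0;

static void set_clock_and_voltage(int mhz, int mv)
{
    /* Stub: real protection would program the VRM/PLL directly. */
    printf("-> %d MHz @ %d mV\n", mhz, mv);
}

static void protection_tick(double vrm_amps)
{
    if (!throttled && vrm_amps > CURRENT_LIMIT_A) {
        /* Step down instead of shutting off: slower, but alive. */
        set_clock_and_voltage(500, 950);
        throttled = 1;
    } else if (throttled && vrm_amps < CURRENT_RECOVER_A) {
        set_clock_and_voltage(850, 1150);
        throttled = 0;
    }
}

int main(void)
{
    /* Simulated readings: normal load, a FurMark-style spike,
     * then recovery after the clocks come down. */
    double samples[] = { 55.0, 85.0, 78.0, 65.0, 55.0 };
    for (int i = 0; i < 5; i++)
        protection_tick(samples[i]);
    return 0;
}
```

The gap between the two thresholds is the important bit: without that hysteresis band the card would bounce between states every tick, which would be its own kind of malfunction.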
Well, the situation is fixed now (apparently), so it's a moot point.
Yeah, hopefully it is fixed. :)
Still wanna see some naked PCB shots of the 5800 X2... see if it's still a PLX chip or not... probably is, but I can hope!
PLX PCIe 2.0??
The 5870 X2 may just be my first dual-GPU card since the Voodoo5 :)
any info on the HD 5870 Six price?
:banana::banana::banana::banana: me, that thing is way too long. What the hell are the engineers smoking? You can make a PCB way shorter than that, even for a 5870 X2. Shame on you, ATi.
I wonder if you can get back half an inch with that cooler off and use a water block. Does the stock cooler extend past the PCB a little? Or is that just wishful thinking? Looks like I will have to relocate my res and redo my loop with this one.
http://i37.tinypic.com/2psiptf.jpg
That is HUGE!!! That's gotta be almost the length of an EATX motherboard, doesn't it??
1) that's like an eATX mobo
2) that's a small ass case
3) people that buy X2's have full towers anyways, if you don't then that's your problem.
EDIT: before someone flames the comment about full towers... judging from the majority of people, everyone here uses full towers and has a side HTPC/SFF computer as a project for their room... All the serious cooling is in full towers, or the larger mid-towers. And I see lots of Stackers/TJs/TT cases, etc...
That's why I love my MM case; I don't have to worry about whether a card will fit or not. As for the FC block situation, I will probably go back to GPU-only cooling. I am tired of buying expensive blocks that can't be used on the next generation of cards, and since I go through cards quite often, it gets really expensive.
Either way... I'll figure a way to fit it into a SFF case. Length doesn't bother me one bit.
It's not that bad. I run mine @800 with the Accelero, and case temp didn't go up by more than 3 to 5C at 75% fan, which is still insanely quiet. But a heatsink with a full shroud would definitely be better. Unless the old Accelero can work on the new card, I won't be upgrading till the new one comes out. That'd give it enough time to come down in price anyhow AND for drivers to get moving. God, the drivers were pissing me off for the 4800 series.