It's pretty stupid that NVIDIA priced the GTX 590 higher than the 6990 when the two are roughly equal in performance. As usual, NVIDIA's pricing doesn't surprise me.
AMD Threadripper 12-core 1920X CPU OC'd to 4GHz | ASUS ROG Zenith Extreme X399 motherboard | 32GB G.Skill Trident RGB 3200MHz DDR4 RAM | Gigabyte 11GB GTX 1080 Ti Aorus Xtreme GPU | SilverStone Strider Platinum 1000W Power Supply | Crucial 1050GB MX300 SSD | 4TB Western Digital HDD | 60" Samsung JU7000 4K UHD TV at 3840x2160
Another GTX 590 bites the dust, this time on Nvidia's suggested 267.71 drivers.
link: http://pclab.pl/news45334.html
It has nothing to do with drivers. The PCB, and the VRM in particular, sucks big time and is seriously underpowered, and I'm not even going to get into the sensor-reporting issue. How long do you think an electronic part can survive at 110 degrees Celsius?
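For a rough sense of scale, electrolytic capacitor life is commonly derated with the 10-degree rule: rated life roughly halves for every 10°C above the rated temperature. Here is a minimal sketch, assuming an illustrative 105°C / 2,000-hour part (those rated values are assumptions, not GTX 590 datasheet numbers):

```python
# Rough electrolytic-capacitor lifetime estimate using the common "10-degree rule":
# rated life roughly halves for every 10 degC above the rated temperature.
# The rated values below are illustrative assumptions, not GTX 590 datasheet numbers.

RATED_LIFE_HOURS = 2000.0  # assumed datasheet life at rated temperature
RATED_TEMP_C = 105.0       # assumed rated temperature

def estimated_life_hours(operating_temp_c: float) -> float:
    """Estimate continuous-operation life at a given temperature via the 10-degree rule."""
    return RATED_LIFE_HOURS * 2 ** ((RATED_TEMP_C - operating_temp_c) / 10.0)

for temp_c in (85, 105, 110, 120):
    print(f"{temp_c:>3} degC -> ~{estimated_life_hours(temp_c):,.0f} h")
# For this assumed part, 110 degC works out to roughly 1,400 h and 120 degC
# to roughly 700 h of continuous operation -- months, not years.
```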
http://www.hardware.fr/articles/825-...e-gtx-590.html
Not again... you need to use Google Translate lol
About as long as the 6990? Or the 5970?
The 59x2 cards I had would easily get over 120°C in the VRM area.
But yeah, forget these dual cards; I don't even want to see AIBs wasting time trying to make them better. Spend the R&D time on 28nm instead.
The worst Frankenstein dual launch card ever... it seems like just yesterday I was using RivaTuner to add volts without a hiccup on my 295/GX2 cards.
Heck, my 7950 GX2 can take added voltage better.
Maybe next time don't skimp to keep costs down on a "best card we ever made" product.
4870 X2s were the hottest dual-GPU cards ever made, and they could run up to 120 degrees without going pop as easily as this.
This is definitely true. When I had issues with mounting my waterblock, my VRM area would hit 124°C and I never had an issue with the card itself... and that went on for more than a few runs until I figured out the mounting issue.
Bottom line is they went cheap in a certain area for God knows what reason and will more than likely pay the price for it.
It seems to explain why they were "able" to make the 590 shorter than the 6990. They didn't engineer it with enough spare OC headroom; a more beefed-up VRM would take up more space.
So I take it Futuremark is just too stressful on GPUs now?
I thought that was the whole idea of it: stress the GPUs to the max and see the score.
I wonder if Nvidia told them not to use Futuremark?
From Guru3D
http://www.guru3d.com/article/geforce-gtx-590-review/7
Note: As of late there has been a lot of discussion about using FurMark as a stress test to measure power load. FurMark is so malicious on the GPU that it does not represent an objective power draw compared to really hefty gaming. If we take a very-harsh-on-the-GPU gaming title, measure power consumption and then compare the very same with FurMark, the power consumption can be 50 to 100W higher on a high-end graphics card solely because of FurMark.
After long deliberation we decided to move away from FurMark and are now using a game-like application which stresses the GPU 100% yet is much more representative of the power consumption and heat levels coming from the GPU. We are however not disclosing what application that is, as we do not want AMD/NVIDIA to 'optimize & monitor' our stress test whatsoever, for our objective reasons of course.
So now we have no idea what they ran their tests with, so we cannot repeat them or even try to at home. That makes perfect sense to me: hide the results behind software only they know was used!
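For what it's worth, anyone can log board power at home and compare FurMark against a game themselves. A minimal sketch, assuming nvidia-smi is on the PATH and the card exposes the power.draw field (cards of the GTX 590 era may not report it):

```python
# Log reported GPU board power once a second while a workload (FurMark, a game,
# whatever) runs in another window, then compare the averages afterwards.
# Assumes nvidia-smi is installed and the GPU exposes power.draw.
import subprocess
import time

def sample_power_watts() -> float:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip().splitlines()[0])  # first GPU only

samples = []
try:
    while True:
        samples.append(sample_power_watts())
        time.sleep(1.0)
except KeyboardInterrupt:
    if samples:
        print(f"{len(samples)} samples, "
              f"avg {sum(samples) / len(samples):.1f} W, "
              f"peak {max(samples):.1f} W")
```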
Furmark =/= Futuremark
Bachelor of Science in Music Production 2016. Mid-2012 MacBook Pro, i7 2.6, 8GB RAM, Nvidia 250M 1GB. Pro Tools, Logic X, PreSonus One, Reaper, GarageBand, Cubase, Cakewalk.
It's ridiculous. How can anyone take any review seriously anymore when the goalposts keep changing to suit the reviewer's agenda? Does anyone really believe reviews are impartial and that there is any sort of objectivity? My guess is that many of these reviewers have an (in)vested interest in public opinion.
A standardized set of review criteria, like the Phoronix Test Suite, would go a LONG way toward creating a fair and open playing field that doesn't change on a per-review basis. There is no way hardware vendors can satisfy consumers when they are constantly bombarded with different demands (from entitled reviewers and enthusiasts alike) and these designs can be five years in the making. There is something inherently flawed with the entire system, and surely it will reach a breaking point; that much seems pretty obvious. Just throwing a gazillion reviews at the problem and taking the average isn't going to accomplish anything.
/rant
Looking at the component numbers on a retail GTX 590 shows me VRMs rated for extended use at 175°C, capacitors rated at 105°C, a 12-layer PCB with a copper core, and chokes good to 125°C. That seems perfectly suitable and in line with most other high-end GPUs.
The only potential issue I see is the 5-phase x 2 design, which may not be sufficient for the current generated by a heavily increased core voltage.
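To put the phase-count worry in rough numbers, here is a back-of-the-envelope sketch; the per-GPU core power and stock voltage in it are assumptions for illustration, not measured GTX 590 figures:

```python
# Back-of-the-envelope per-phase current for a 5-phase core VRM.
# Core power and stock voltage below are assumptions for illustration,
# not measured GTX 590 figures.

PHASES = 5                  # core phases per GPU, as noted above
CORE_POWER_STOCK_W = 150.0  # assumed core power per GPU at stock
V_STOCK = 0.93              # assumed stock core voltage
V_OC = 1.20                 # the overvolted setting seen on the failed review cards

def per_phase_current(core_power_w: float, vcore: float) -> float:
    """Total core current split evenly across the phases (ideal current sharing)."""
    return core_power_w / vcore / PHASES

# Dynamic power scales roughly with V^2 at a fixed clock,
# so total core current scales roughly linearly with voltage.
core_power_oc_w = CORE_POWER_STOCK_W * (V_OC / V_STOCK) ** 2

print(f"stock:  ~{per_phase_current(CORE_POWER_STOCK_W, V_STOCK):.0f} A per phase")
print(f"1.20 V: ~{per_phase_current(core_power_oc_w, V_OC):.0f} A per phase")
# Roughly 32 A per phase at stock vs 42 A per phase overvolted, before any
# clock increase is applied on top of it.
```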
Damn, these dying cards seem to be popping up at a decent rate. Considering this is XtremeSystems, that makes this an unbuyable card because of these shortcomings. NV should have made the card longer, with better cooling.
Core i7 920 @ 4.66GHz (H2O)
6GB OCZ Platinum
4870X2 + 4890 in TriFire
2x 640GB WD Blacks
750GB Seagate
I had two 4870 X2s die, and I made a thread on Guru3D showing that a lot of people have the same issue. Even though they can run hot, it isn't desirable and led to the early death of a lot of cards. I always kept my fan speed cranked up, which kept the GPU at 65-75°C. The problem is that the VRMs had insufficient (or no) cooling, which causes them to go bad. Both my original card and the RMA replacement had the VRMs go bad, and there are dozens of people on Guru3D who had the same issue.
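One cheap insurance policy is to actually watch the VRM sensors while gaming, for example by letting GPU-Z log to a file and scanning the log afterwards. A minimal sketch; the file name and column header are assumptions about your particular log, so adjust them to whatever your tool writes:

```python
# Scan a GPU-Z style sensor log (written via its "Log to file" option) and
# count samples where the VRM temperature went above a chosen limit.
# The file name and column header below are assumptions -- check your own log's header.
import csv

LOG_PATH = "gpu-z-sensor-log.txt"          # hypothetical log file name
VRM_COLUMN = "VDDC Phase #1 Temperature"   # hypothetical column name
LIMIT_C = 100.0

hot, total = 0, 0
with open(LOG_PATH, newline="") as fh:
    for row in csv.DictReader(fh):
        value = (row.get(VRM_COLUMN) or "").strip()
        if not value:
            continue
        total += 1
        if float(value) > LIMIT_C:
            hot += 1

print(f"{hot} of {total} samples above {LIMIT_C:.0f} degC")
```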
--Intel i5 3570K 4.4GHz (stock volts) - Corsair H100 - 6970 UL XFX 2GB - Asrock Z77 Professional - 16GB G.Skill 1866MHz - 2x 90GB Agility 3 - WD 640GB - 2x WD 320GB - 2TB Samsung Spinpoint F4 - Audigy-- --NZXT Phantom - Samsung SATA DVD-- (old systems: Intel E8400 Wolfdale/Asus P45, AMD 965BE C3 790X, Antec 180, Sapphire 4870 X2 (dead twice))
The reviewers' cards that have blown up are the reviewers' own fault for pumping 1.2 V through them with stock cooling.
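For a rough idea of why that is asking for trouble, here is a sketch of the board-power math at 1.2 V; the split between core power and everything else, and the stock voltage, are assumptions, with only the 365 W stock board power being a published figure:

```python
# Rough board-power estimate for a GTX 590 pushed to 1.2 V core on stock cooling.
# The core/other split and the stock voltage are assumptions for illustration;
# only the 365 W stock board power is a published spec.

BOARD_TDP_W = 365.0            # published stock board power
CORE_POWER_STOCK_W = 2 * 150.0 # assumed combined core power of both GPUs
OTHER_W = BOARD_TDP_W - CORE_POWER_STOCK_W  # memory, fan, losses (treated as fixed)
V_STOCK, V_OC = 0.93, 1.20     # assumed stock core voltage vs the reviewers' setting

core_power_oc_w = CORE_POWER_STOCK_W * (V_OC / V_STOCK) ** 2  # dynamic power ~ V^2
board_power_oc_w = core_power_oc_w + OTHER_W

print(f"stock board power:            ~{BOARD_TDP_W:.0f} W")
print(f"board power at {V_OC:.2f} V core:  ~{board_power_oc_w:.0f} W")
print(f"official 2x 8-pin + slot budget: {150 + 150 + 75} W")
# Well over 500 W through a board, cooler and connectors sized around 365 W,
# before any extra clocks are even applied.
```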
CPU: i7 2600K @ 5.2GHz
Cooling: Water
RAM: 2x2GB G.Skill
Video card: GTX 580
OS: Windows 7 x64
I am just going to dump some more Fermi-burning pics for the occasion.
I'd sum it up this way:
NVIDIA did a fair job releasing a card that is (questionably) as fast as the 6990, shorter than the competition (at what cost?) and "quieter" (at the cost of dumping more heat inside the case and thus heating everything else). All in all, as the pricing suggests, the two cards end up "equal", with the usual higher consumption on the green side that I've already gotten used to.
BUT, what pisses me off is that NVIDIA failed AGAIN on the hardware front. For quite some time now it has seemed to me that NVIDIA opts for cheaper components, weaker designs, and "fair enough" overall construction to save some money (and make more) off us, the people who made them who they are.
Sure, their driver team is a bit better on the game-optimization front, and their drivers can generally be considered more stable, but I have never seen an ATI driver be even remotely responsible for the death of its hardware (remember the fried NVIDIA cards with stopped fans?).
Everybody makes mistakes. But if someone makes the same "mistakes" (are they?) again and again, it's definitely not good.
I'd really love to buy an NVIDIA product. I would really like them to deserve my money again, but I am a hardware guy, and on the hardware side of things there has been no real choice for the past couple of years. Even in the days when ATI had the most power-hungry cards, they were built to withstand more than just normal operation.
The saddest thing about all this is that they generally do it to their high end. And do I want a burnt $700 card to extend my collection of dead hardware? No.