Page 7 of 18
Results 151 to 175 of 449

Thread: GTX 590 reviews

  1. #151
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by regenade View Post
    Why is SKYMTL so much behind Nvidia always :-P
    Did you read the review? I mean actually READ it? I thought I was quite transparent about the card's shortcomings.

    As for the "puff of smoke" issue, I am taking a wait and see approach rather than running around like my hair is on fire.

  2. #152
    Xtreme Addict Chrono Detector's Avatar
    Join Date
    May 2009
    Posts
    1,142
    It's pretty stupid that NVIDIA priced the GTX 590 higher than the 6990 when both perform about equally. As usual, NVIDIA's pricing doesn't surprise me.
    AMD Threadripper 12 core 1920x CPU OC at 4Ghz | ASUS ROG Zenith Extreme X399 motherboard | 32GB G.Skill Trident RGB 3200Mhz DDR4 RAM | Gigabyte 11GB GTX 1080 Ti Aorus Xtreme GPU | SilverStone Strider Platinum 1000W Power Supply | Crucial 1050GB MX300 SSD | 4TB Western Digital HDD | 60" Samsung JU7000 4K UHD TV at 3840x2160

  3. #153
    Banned
    Join Date
    Sep 2009
    Location
    Past
    Posts
    447
    Quote Originally Posted by bamtan2 View Post
    Is that right. Any sensible person. Is that what you're doing?
    I never use reference cooling. Either I use a third-party cooler or I buy a vendor-modified card so it stays cool and quiet.
    Is that weird?
    And I own mid-range GPUs. On high-end ones I'd think it's a no-brainer.

  4. #154
    Another GTX 590 bites the dust, this time on the Nvidia-suggested 267.71 drivers.

    link: http://pclab.pl/news45334.html

  5. #155
    Xtreme Member
    Join Date
    Jul 2009
    Posts
    319
    It has nothing to do with drivers. The PCB, and the VRM in particular, is seriously underpowered, and that's before even getting into the sensor-reporting issue. How long do you think an electronic part can survive at 110 degrees Celsius?



    http://www.hardware.fr/articles/825-...e-gtx-590.html
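    The 110 °C question can be put into rough numbers. As an illustration only (the rated lifetime and temperatures below are hypothetical, not taken from any datasheet or from this thread), a common rule of thumb for capacitors and similar electronics is the Arrhenius-derived "10 °C rule": expected lifetime roughly halves for every 10 °C above the component's rated temperature.

```python
# Rough Arrhenius-style "10 degree rule" lifetime estimate.
# All numbers below are hypothetical illustrations, not vendor data.

def estimated_lifetime_hours(rated_hours: float,
                             rated_temp_c: float,
                             actual_temp_c: float) -> float:
    """Lifetime halves for every 10 C above the rated temperature
    (and doubles for every 10 C below it)."""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A part rated for 5,000 h at 105 C, run at 110 C:
print(estimated_lifetime_hours(5000, 105, 110))   # ~3,536 h

# The same part kept at 85 C instead:
print(estimated_lifetime_hours(5000, 105, 85))    # 20,000 h
```

    By that rule of thumb a part already running at its rating has no thermal margin left, and running hotter eats into its design life quickly, which is the worry being raised here.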
    Quote Originally Posted by Cleatus View Post
    Just cause you pour syrup over crap dont make it pancakes

  6. #156
    Xtreme Addict
    Join Date
    Oct 2006
    Location
    new jersey
    Posts
    1,100
    Quote Originally Posted by Shadov View Post
    Another GTX 590 bites the dust, this time on the Nvidia-suggested 267.71 drivers.

    link: http://pclab.pl/news45334.html

    Not again... You need to use Google Translate, lol.
    Quote Originally Posted by halfwaythere View Post
    It has nothing to do with drivers. The PCB, and the VRM in particular, is seriously underpowered, and that's before even getting into the sensor-reporting issue. How long do you think an electronic part can survive at 110 degrees Celsius?

    http://www.hardware.fr/articles/825-...e-gtx-590.html
    About as long as the 6990? Or the 5980?
    The 59x2 cards I had would easily get over 120 °C in the VRM area.


    But yeah, forget these dual cards; I don't even want to see AIBs wasting time making them better. Spend the R&D time on 28nm instead.
    This is the worst dual Franken-launch card ever. It seems like just yesterday I was using RivaTuner to add volts without a hiccup on my 295/GX2 cards.
    Heck, my 7950X2 can take added voltage better.
    Maybe next time don't skimp out to keep costs down on a "best card we ever made" product.

  7. #157
    Xtreme Mentor
    Join Date
    Feb 2008
    Posts
    2,671
    4870 X2s were the hottest dual GPU cards made and could run up to 120 degrees without going pop this easily.

  8. #158
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Posts
    511
    Quote Originally Posted by bhavv View Post
    4870 X2s were the hottest dual GPU cards made and could run up to 120 degrees without going pop this easily.
    This is definitely true. When I had issues mounting my waterblock, my VRM area would hit 124 °C, and the card itself was never harmed; this went on for more than a few runs until I figured out the mounting issue.

    Bottom line: they went cheap in a certain area for God knows what reason, and more than likely they will pay the price for it.

  9. #159
    Banned
    Join Date
    Sep 2009
    Location
    Past
    Posts
    447
    That seems to explain how they were "able" to make the 590 shorter than the 6990: they didn't engineer it with enough spare OC headroom. A more beefed-up VRM would take more space.

  10. #160
    Xtreme Member
    Join Date
    Sep 2009
    Location
    London
    Posts
    247
    Quote Originally Posted by XRL8 View Post
    That seems to explain how they were "able" to make the 590 shorter than the 6990: they didn't engineer it with enough spare OC headroom. A more beefed-up VRM would take more space.

    If that's the case, so much for innovation!

  11. #161
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    So I take it Futuremark is just too stressful on GPUs now?
    I thought that was the whole idea of it: stress the GPUs to the max and see the score.
    I wonder if Nvidia told them not to use Futuremark?

    From Guru3D
    http://www.guru3d.com/article/geforce-gtx-590-review/7

    Note: As of late, there has been a lot of discussion about using FurMark as a stress test to measure power load. FurMark is so malicious on the GPU that it does not represent an objective power draw compared to really hefty gaming. If we take a very-harsh-on-the-GPU gaming title, measure power consumption, and then compare it with FurMark, the power consumption can be 50 to 100 W higher on a high-end graphics card solely because of FurMark.

    After long deliberation we decided to move away from FurMark and are now using a game-like application which stresses the GPU 100% yet is much more representative of the power consumption and heat levels coming from the GPU. We are, however, not disclosing what application that is, as we do not want AMD/NVIDIA to 'optimize & monitor' our stress test whatsoever, for objectivity reasons of course.

    So now we have no idea what they ran their tests on, so we cannot repeat them or try to at home. That makes perfect sense to me: hide your results behind software only you know about.
    Last edited by Buckeye; 03-25-2011 at 08:25 AM.

  12. #162
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Portugal
    Posts
    233
    Furmark =/= Futuremark

  13. #163
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Quote Originally Posted by RSC View Post
    Furmark =/= Futuremark
    Ah yes, my mistake.

    Still, what I said holds true.

  14. #164
    Xtreme Member
    Join Date
    Jul 2009
    Posts
    319
    The 4870 X2 was an engineering masterpiece and can't be compared with the 590, which uses sub-high-end standard components. Some cards are built to run at high temps, while others just crash and go pop.
    Quote Originally Posted by Cleatus View Post
    Just cause you pour syrup over crap dont make it pancakes

  15. #165
    Xtreme Enthusiast
    Join Date
    Feb 2007
    Location
    deviant art
    Posts
    512
    Quote Originally Posted by vern View Post

    No OC card
    Sweet. I know these are limited for power consumption; I wonder, if someone unlocked its potential and fed it all the wattage it could take, whether it would be a massive boost.
    Bachelor of Science in Music Production 2016, Mid 2012 mack book Pro i7 2.6 8gb ram Nvidia 250m 1gb . Pro Tools , Logic X, Presonus one, Reaper, Garage Band. Cubase, Cakewalk.

  16. #166
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Quote Originally Posted by Buckeye View Post
    So I take it Futuremark is just too stressful on GPUs now?
    I thought that was the whole idea of it: stress the GPUs to the max and see the score.
    I wonder if Nvidia told them not to use Futuremark?

    From Guru3D
    http://www.guru3d.com/article/geforce-gtx-590-review/7

    Note: As of late, there has been a lot of discussion about using FurMark as a stress test to measure power load. FurMark is so malicious on the GPU that it does not represent an objective power draw compared to really hefty gaming. If we take a very-harsh-on-the-GPU gaming title, measure power consumption, and then compare it with FurMark, the power consumption can be 50 to 100 W higher on a high-end graphics card solely because of FurMark.

    After long deliberation we decided to move away from FurMark and are now using a game-like application which stresses the GPU 100% yet is much more representative of the power consumption and heat levels coming from the GPU. We are, however, not disclosing what application that is, as we do not want AMD/NVIDIA to 'optimize & monitor' our stress test whatsoever, for objectivity reasons of course.

    So now we have no idea what they ran their tests on, so we cannot repeat them or try to at home. That makes perfect sense to me: hide your results behind software only you know about.
    It's ridiculous. How can anyone take any review seriously anymore, when the goal posts keep changing to suit the reviewer's agenda? Does anyone really believe reviews are impartial and that there is any sort of objectivity? My guess is that many of these reviewers have a (in)vested interest in public opinion.

    A standardized set of review criteria, like the Phoronix Test Suite, would go a LONG way toward creating a fair and open playing field that doesn't change on a per-review basis. There is no way hardware vendors can satisfy consumers when they are constantly bombarded with different demands (from entitled reviewers and enthusiasts alike), and these designs can be five years in the making. There is something inherently flawed with the entire system, and surely it will reach a breaking point; that much seems pretty obvious. Just throwing a gazillion reviews at the problem and taking the average isn't going to accomplish anything.

    /rant

  17. #167
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by halfwaythere View Post
    The 4870x2 was an engineering masterpiece and can't be compared with the 590 which uses sub-high end standard components. Some cards are built to run at high temps while others just crash and go pop.
    Looking at the component numbers on a retail GTX 590 shows me VRMs that are rated for extended use at 175°C, capacitors that are rated at 105°C, a 12-layer PCB with a copper core, and chokes that are good to 125°C. That seems perfectly suitable and in line with most other high-end GPUs.

    The only potential issue I see is the 5-phase x 2 design, which may not be sufficient for the current generated by a highly increased core voltage.
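    The per-phase current concern above can be sketched with a back-of-the-envelope calculation. All figures below are hypothetical illustrations (a rough stock operating point and a heavy overvolt), not measurements from any review; dynamic CMOS power is assumed to scale roughly with frequency times voltage squared.

```python
# Back-of-the-envelope per-phase VRM current under overvolting.
# Every number here is a hypothetical illustration, not a measurement.

def scaled_power(base_power_w: float, base_v: float, new_v: float,
                 base_mhz: float, new_mhz: float) -> float:
    """Dynamic power scales roughly with frequency * voltage^2."""
    return base_power_w * (new_mhz / base_mhz) * (new_v / base_v) ** 2

def per_phase_current(power_w: float, vcore: float, phases: int) -> float:
    """Total GPU current split evenly across the VRM phases."""
    return power_w / vcore / phases

# One GPU at an assumed stock-ish point (180 W, 0.91 V, 607 MHz), 5 phases:
stock = per_phase_current(180, 0.91, 5)
print(round(stock, 1))  # ~39.6 A per phase

# Overvolted to 1.2 V and clocked up to 772 MHz:
oc_power = scaled_power(180, 0.91, 1.2, 607, 772)
oc = per_phase_current(oc_power, 1.2, 5)
print(round(oc, 1))  # ~66.3 A per phase
```

    Under these assumed numbers the per-phase current rises by roughly two thirds, which is exactly the kind of jump a 5-phase design sized for stock operation might not tolerate.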

  18. #168
    Xtreme Addict
    Join Date
    Jan 2009
    Location
    Near Venice as they say
    Posts
    1,314
    Quote Originally Posted by mycoolcar View Post
    TRUE Lapped - Intel Core i7 2600k 4,7Ghz - ASRock P67 Extreme4 Gen3 - Nvidia GTX 1080 FE - 16Gb Crucial 2133 Mhz CL9 1,51v - Crucial M4 256Gb - Crucial MX300 1050Gb - Corsair AX850 - Fractal Define R3


  19. #169
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Damn, reports of dying cards keep popping up. Considering this is XtremeSystems, this looks like an unbuyable card because of these shortcomings. NV should have made the card longer, with better cooling.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  20. #170
    Xtreme Supporter
    Join Date
    Jan 2008
    Location
    USA
    Posts
    654
    Quote Originally Posted by bhavv View Post
    4870 X2s were the hottest dual GPU cards made and could run up to 120 degrees without going pop this easily.
    My little space heater brings back memories.

  21. #171
    Xtreme Enthusiast
    Join Date
    Aug 2008
    Posts
    577
    Quote Originally Posted by bhavv View Post
    4870 X2s were the hottest dual GPU cards made and could run up to 120 degrees without going pop this easily.
    I had two 4870 X2s die, and I made a thread on Guru3D showing that a lot of people have the same issue. Even though they can run hot, it isn't desirable and led to the early death of a lot of cards. I always kept my fan speed cranked up, which kept the GPU at 65-75 °C. The problem is that the VRMs had insufficient (or no) cooling, which caused them to go bad. Both my original card and the RMA replacement had their VRMs fail, and there are dozens of people on Guru3D who had the same issue.
    --Intel i5 3570k 4.4ghz (stock volts) - Corsair H100 - 6970 UL XFX 2GB - - Asrock Z77 Professional - 16GB Gskill 1866mhz - 2x90GB Agility 3 - WD640GB - 2xWD320GB - 2TB Samsung Spinpoint F4 - Audigy-- --NZXT Phantom - Samsung SATA DVD--(old systems Intel E8400 Wolfdale/Asus P45, AMD965BEC3 790X, Antec 180, Sapphire 4870 X2 (dead twice))

  22. #172
    Xtreme Enthusiast
    Join Date
    Oct 2006
    Location
    AU
    Posts
    510
    The reviewers' cards that have blown up are the reviewers' fault for pumping 1.2 V through them with stock cooling.
    CPU: i7 2600K @5.2GHz
    Cooling: Water
    Ram: 2x2GB G.skill
    Videocard: GTX 580
    OS: Windows 7 x64

  23. #173
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by X.T.R.E.M.E_ICE View Post
    The reviewers' cards that have blown up are the reviewers' fault for pumping 1.2 V through them with stock cooling.
    The benchmark running was 3DMark 11. Settings in use when the card failed:
    GPU clock @ 772 MHz
    GPU vcore @ 1.025 V
    SweClockers.com

  24. #174
    Xtreme Enthusiast
    Join Date
    Aug 2007
    Posts
    668
    I am just going to dump some more Fermi burning pics for the occasion.







    Last edited by HotGore; 03-25-2011 at 04:38 PM.

  25. #175
    Registered User
    Join Date
    Feb 2006
    Posts
    76
    I'd sum it up this way:

    Nvidia did a fair job releasing a card that is (questionably, though) as fast as the 6990, shorter than the competition (at what cost?), and "quieter" (at the cost of dumping more heat inside the case, thus heating everything else). All in all, as the pricing suggests, the two cards end up "equal", with the usual higher consumption on the green side that I have already gotten used to.

    BUT, what pisses me off is that Nvidia failed AGAIN on the hardware front. As it has seemed to me for quite some time now, Nvidia opts for cheaper components, a weaker design, and "fair enough" overall construction to save some money (and make more) off us, the people who made them who they are.

    Sure, their driver team is a bit better on the game-optimization front, and their drivers can generally be considered more stable, but I have never seen an ATI driver being even remotely responsible for the death of its hardware (remember the fried Nvidia cards with stopped fans?).

    Everybody makes mistakes. But if someone makes the same "mistakes" (are they?) again and again, it's definitely not good.

    I'd really love to buy an Nvidia product; I would really like them to deserve my money again. But I am a hardware guy, and on the hardware side of things there has been no real choice for the past couple of years. Even in the days when ATI had the most power-hungry cards, they were built to withstand more than just normal operation.

    The saddest thing about all this is that they generally do it to their high end. And do I want a burnt $700 card to extend my collection of dead hardware? No.

