
Thread: NVIDIA GTX 595 (picture+Details)

  1. #276
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by saaya View Post
    Will pcie 3.0 bring moar powa? Does anybody know?
    Not that I know of.

    I believe boards which fell under the HPCES v1.0 specification (remember, the PCI-E 2.0 base spec never included 225W - 300W inputs) will now be incorporated directly into the PCI-E 3.0 base spec. However, there weren't any increases in power delivery limitations.

    This basically means that companies producing higher-end cards will no longer need certifications for BOTH HPCES and PCI-E but the maximum remains 300W.

  2. #277
    Xtreme Guru
    Join Date
    Dec 2003
    Location
    Vancouver, Canada
    Posts
    3,858
    Who cares what the power consumption is? More important are the cost and performance.
    i5 750 4.20GHz @ NH-D14 | 8GB | P7P55DLE | 8800U | Indilinx SSD + Samsung F3 | HAF922 + CM750W
    Past: Q6600 @ 3.60 E6400 @ 3.60 | E6300 @ 3.40 | O165 @ 2.90 | X2 4400+ @ 2.80 | X2 3800+ @ 2.70 | VE 3200+ @ 2.80 | WI 3200+ @ 2.75 | WI 3000+ no IHS @ 2.72 | TBB 1700+ @ 2.60 | XP-M 2500+ @ 2.63 | NC 2800+ @ 2.40 | AB 1.60GHz @ 2.60
    Quote Originally Posted by CompGeek
    The US is the only country that doesn't use [nuclear weapons] to terrorize other countries. The US is based on Christian values, unlike any other country in the world. Granted we are straying from our Christian heritage, but we still have a freedom aimed diplomatic stance.

  3. #278
    Diablo 3! Who's Excited?
    Join Date
    May 2005
    Location
    Boulder, Colorado
    Posts
    9,412
    Quote Originally Posted by IvanAndreevich View Post
    Who cares what the power consumption is? More important are the cost and performance.
    We have standards for a reason.

  4. #279
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by [XC] gomeler View Post
    We have standards for a reason.
    Makes it easier to spot outstanding devices?
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  5. #280
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    awesome article on xbitlabs testing 560 SLI
    http://www.xbitlabs.com/articles/vid...i_7.html#sect0

    At the highest resolution the SLI configuration is comparable to the GeForce GTX 570 SLI tandem we tested earlier in scalability. Its performance increases by an average 70% over that of the single card. The maximum performance boost is as high as 105%. Its advantage over the GeForce GTX 580 and Radeon HD 5970 remains the same at 19-21%, though. Compared to that, the GeForce GTX 570 SLI tandem enjoyed an advantage of 42-43% over those cards. Thus, the GeForce GTX 560 Ti SLI delivers comfortable performance where the single GeForce GTX 580 can do the same and even fails in BattleForge and S.T.A.L.K.E.R.: Call of Pripyat.

    So, a dual-GF114 graphics card wouldn’t be competitive as a flagship solution. It might be fast but only about as fast as an ordinary single-chip GeForce GTX 580.
    so nvidia is in a tight spot...
    dual 114 scales well, costs less and runs cooler than dual 110, BUT it reaches a perf spot their 580 already conquered months ago...

    so they HAVE to go dual gf110... which means heat heat heat and some more heat... but how are they going to take care of the memory width issue? and where will they put all the memory chips? dual pcb again? phew!

  6. #281
    Xtreme Mentor
    Join Date
    Mar 2006
    Location
    Evje, Norway
    Posts
    3,419
    *Cough* 1GB vram *Cough*

But still, I agree, dual GF114 ain't fast enough.
    Quote Originally Posted by iddqd View Post
    Not to be outdone by rival ATi, nVidia's going to offer its own drivers on EA Download Manager.
    X2 555 @ B55 @ 4050 1.4v, NB @ 2700 1.35v Fuzion V1
    Gigabyte 890gpa-ud3h v2.1
    HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
    Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 8.9.8.27.41 1T 110ns 1.605v
    C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
    DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
    Dell U2412m, G110, G9x, Razer Scarab

  7. #282
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
I think they are... it's just that GF110 is so fast, it gets there all on its own

  8. #283
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    ROMANIA
    Posts
    687
Unfortunately, from what I know, GF104/114 supports a maximum 256-bit memory bus; I wonder how much more boost a 320-bit bus would give.
It could be just enough to surpass the GTX 580/5970 by 30%.
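As a quick back-of-envelope (bandwidth scales linearly with bus width at a fixed data rate; the 4.0 GT/s effective GDDR5 rate below is an illustrative assumption, not a quoted spec):

Code:
# Memory bandwidth = bus width (bits) x effective data rate (GT/s) / 8.
def bandwidth_gbs(bus_width_bits, rate_gts):
    return bus_width_bits * rate_gts / 8

narrow = bandwidth_gbs(256, 4.0)  # 128.0 GB/s on a 256-bit bus
wide = bandwidth_gbs(320, 4.0)    # 160.0 GB/s on a 320-bit bus
print(f"256-bit: {narrow:.0f} GB/s, 320-bit: {wide:.0f} GB/s "
      f"(+{(wide / narrow - 1) * 100:.0f}%)")  # +25% raw bandwidth

So a 320-bit bus buys 25% more raw bandwidth at the same memory clock; whether that alone translates into a 30% lead is another matter.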
    i5 2500K@ 4.5Ghz
    Asrock P67 PRO3


    P55 PRO & i5 750
    http://valid.canardpc.com/show_oc.php?id=966385
239 BCLK validation on cold air
http://valid.canardpc.com/show_oc.php?id=966536
Almost 5GHz, air.

  9. #284
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by saaya View Post
    awesome article on xbitlabs testing 560 SLI
    http://www.xbitlabs.com/articles/vid...i_7.html#sect0


    so nvidia is in a tight spot...
    dual 114 scales well, costs less and runs cooler than dual 110, BUT it reaches a perf spot their 580 already conquered months ago...

    so they HAVE to go dual gf110... which means heat heat heat and some more heat... but how are they going to take care of the memory width issue? and where will they put all the memory chips? dual pcb again? phew!
I think it would be more useful as a card if NV clocked a GF110 card up to 1GHz (950 has been done with some stock coolers), used 1.4GHz memory, and used the 128 TMUs as originally specified, since texture and memory bandwidth seem to be the limiting factors on the GTX 580. You would have a card that is undoubtedly faster than a 5970 by 15% or so, and a single chip.

Hardware Canucks clocked the GTX 580 up to 950 MHz and it used 40 more watts

http://www.hardwarecanucks.com/forum...dition-13.html

so 1 GHz seems possible with great binning and a better cooler. A card that is about 25-30% faster than a GTX 580, with 3GB of GDDR5, at 650 dollars would be fair I think.

For me, I would rather have a single chip that is 85% of the speed than a dual chip that is 100%. AMD driver support for the 4870X2 is non-existent (and its performance is getting worse with newer drivers), and I am not sure NV is any better with their dual cards.
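To put rough numbers on that Hardware Canucks data point (a sketch: 772 MHz stock and 244 W TDP are the GTX 580's published figures, while the linear watts-per-MHz extrapolation is just an assumption, and voltage bumps would make it steeper):

Code:
# Linear extrapolation of the overclock above: GTX 580 at 772 MHz
# stock / 244 W TDP, ~40 W extra at 950 MHz.
stock_mhz, stock_w = 772, 244
oc_mhz, extra_w = 950, 40

w_per_mhz = extra_w / (oc_mhz - stock_mhz)           # ~0.22 W/MHz
est_1ghz = stock_w + w_per_mhz * (1000 - stock_mhz)  # ~295 W
print(f"{w_per_mhz:.2f} W/MHz -> ~{est_1ghz:.0f} W at 1 GHz")

That lands right at the 300 W ceiling even before any voltage bump, which is exactly why the binning and the better cooler matter.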
    Last edited by tajoh111; 02-07-2011 at 11:14 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  10. #285
    Xtreme Enthusiast
    Join Date
    Nov 2004
    Location
    Denmark
    Posts
    817
So what everybody's saying is that both companies will pretty much be stuck performance-wise until they can finally transition to a newer process technology?
I don't see them re-designing chips on the current 40nm node, so I guess they'll have to suck it up and accept that they've hit the ceiling... And if your current single-chip solution is pretty much using all of the 300W max, and it's as efficient as possible, a handicapped dual-chip card just doesn't make sense, as you would have to cut each chip's performance in half to fit two of them on a card... And then you're just back to square one...
I'm looking forward to seeing what NVIDIA and AMD will do with their dual-chip solutions, but I think the reason we haven't seen them yet is the same: they can't make them perform appreciably better than their single-chip counterparts...

    Best Regards
    Silverstone RAVEN RV02|
    Core i5 2500K@4.4GHz, 1,300V|
    Corsair A70|ASUS P67 Sabertooth|Creative X-Fi Titanium Fatal1ty|
    Corsair Dominator DDR1600 4x4096MB@DDR3-1600@1.65V|Sapphire HD7970 3GB 1075/1475MHz|
    Corsair Force F120 120GB SSD SATA-II, WD Caviar Black 2x1TB SATA-II 32mb, Hitatchi 320GB SATA-II 16mb|Silverstone DA750 750w PSU|

  11. #286
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
Yeah, the 4870X2 was terrible driver-wise when it came out... Crysis was an artifact adventure... no idea if it's better now
In my experience NV is doing a slightly better job with dual-GPU drivers, but I'd still prefer a single GPU

But about TDP... when you push silicon to the limit you get 10% extra perf for maybe 20% more power... IF that's the case with GF110, then maybe a dual-GPU card is able to get more performance out of the 300 W limit... if NVIDIA can pull this off I'd be very impressed!
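That trade-off checks out with the usual P ∝ V²·f rule of thumb. A minimal sketch, assuming illustrative derating and SLI-scaling numbers rather than GF110 measurements:

Code:
# Two derated GF110s vs one pushed to stock, under P ~ V^2 * f.
STOCK_TDP = 244                     # single GTX 580, watts

def rel_power(f, v):                # power relative to stock (f = v = 1.0)
    return v * v * f

f, v = 0.75, 0.905                  # -25% clock, ~-10% voltage (assumed)
per_gpu_w = rel_power(f, v) * STOCK_TDP   # ~150 W per GPU
dual_perf = f * 1.80                # assuming ~80% SLI scaling
print(f"dual card: ~{2 * per_gpu_w:.0f} W total, "
      f"~{dual_perf:.2f}x a stock GTX 580")   # ~300 W, ~1.35x

Under those assumptions, two throttled chips pull roughly 35% more performance out of the same 300 W than a single chip at stock.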

  12. #287
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Posts
    743
    Quote Originally Posted by SKYMTL View Post
    Not that I know of.

    I believe boards which fell under the HPCES v1.0 specification (remember, the PCI-E 2.0 base spec never included 225W - 300W inputs) will now be incorporated directly into the PCI-E 3.0 base spec. However, there weren't any increases in power delivery limitations.

    This basically means that companies producing higher-end cards will no longer need certifications for BOTH HPCES and PCI-E but the maximum remains 300W.
Damn, that's disappointing. I was hoping to see Nvidia/AMD no longer have to tiptoe around the 300W limit, so they could unleash some real future video card beasts. I know there are a few limited exceptions, like NVidia's past limited-edition dual GPU, but those aren't appealing from a financial point of view.
    Last edited by kadozer; 02-07-2011 at 11:50 PM.

  13. #288
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
yeah, I don't get it either...
it's not like the 300W limit actually limits power consumption of systems anyways...
people who want extra power get up to 4 300W cards...

actually, it would be more power efficient to allow a single VGA with 400W then, cause then people who want ultra performance would run 2x400 instead of 3x300 or 4x300...
and people who want this kind of power overclock anyways, so the limitation doesn't matter at all...

300W is clearly not enough and is limiting cards... so I'm surprised nvidia and amd are not pushing for more...
think about it, if there hadn't been a 300W limit then nvidia could actually have launched a GTX480 Ultra with 512 SPs and a 350W TDP and a massive heatsink...

and ati could have launched the 5970 at full 5870 clocks instead of clocking it down...

ah well... if ati and nvidia don't care...
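The 2x400 vs 3x300 argument in numbers, with a simple diminishing-returns model (purely illustrative: each added card is assumed to contribute 80% of the previous one, and the hypothetical 400 W card is assumed 25% faster than a 300 W one):

Code:
# Total performance of n identical cards with diminishing scaling.
def multi_gpu_perf(single, n, scale=0.80):
    return single * sum(scale ** i for i in range(n))

big = multi_gpu_perf(1.25, 2)     # two hypothetical 400 W cards
small = multi_gpu_perf(1.00, 3)   # three 300 W cards
print(f"2x400 W: {big:.2f} perf @ 800 W ({big / 800 * 1000:.2f} per kW)")
print(f"3x300 W: {small:.2f} perf @ 900 W ({small / 900 * 1000:.2f} per kW)")

So the two big cards give nearly the same performance for 100 W less; the exact outcome obviously hinges on how much faster a 400 W design would really be.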

  14. #289
    Xtreme Member
    Join Date
    Nov 2008
    Location
    Russia
    Posts
    127
    Quote Originally Posted by Sn0wm@n View Post
ok so to heck with the standards then, right... no standards = no compatible parts for us... proprietary solutions from every company... and this makes them cost a lot more, etc...


but hey, you're right... having limits on things to a certain degree is bad...


if you do think so, I'm sure you must have an Asus Ares... and an Asus Mars GPU or two sitting somewhere, right
Yeah, they must remove this stupid limit for practical reasons.
Overclocked cards consume more than 300W anyway...
    CPU: i7 930 3002A648 (Venomous X+Ultra Kaze 2000rpm)
    MB: Asus P6X58D Premium Bios 1501
Video: Sapphire HD 5970
    Memory: Corsair Dominator TR3X6G1600C8D @ 1600MHz
    X-Fi Extreme Gamer Fatality Pro
    HDD: Vertex 2 100GB+WD Caviar Black 1Tb
    Case: TT Kandalf LCS Silver
    Power: CM Real Power Pro 1000W
    Display: Samsung 275T
    Input: Logitech G25/G15/G500/Z-2300

  15. #290
    Registered User
    Join Date
    Feb 2010
    Location
    Hungary, Budapest
    Posts
    31

300W

From what little I know, the reason they won't raise the 300W limit is the OEM market. 300W of heat is already pushing the envelope for what a normal medium-sized case can handle, not to mention that it is per PCI-E slot. OEM certifications (user safety, fire protection, shock protection, whatever) would be impossible to get with higher limits. Just imagine if someone burns his/her fingers touching the outside of the case, which can get really hot with 1 or 2 "heaters" inside.

Also, most OEM machines' VRMs and other parts are built to these limits to remain cheap.

We xtreme guys are only 1% or even less of the total market; the big business is in the OEM market, they are the ones ordering thousands of parts.

Also, as far as I can tell, for both AMD and Nvidia the discrete graphics card market's income is decreasing each and every year, and now with the introduction of APUs like Fusion it will get even smaller, as 90% of this ever-shrinking income comes from low/medium-end cards, not from the beasts that most of us here own.

So unless the OEMs change their minds or AMD/Nvidia seriously lower their prices (neither likely to happen), the PCI-E power limit will remain.
    Last edited by csatahajos; 02-08-2011 at 04:16 AM. Reason: edited for grammar
    2 x ES Intel Xeon E5 2680v2 @8cores/16threads/25MB L3 cache @ 3,4 Ghz | Asus Z9PE-D8 WS | 6x4GB Kingston HyperX Predator | EVGA Geforce 980Ti HydroCopper @1350Mhz | HyperX 240GB PCi SSD boot + Samsung 840 SSD 2 x 256GB | 25 TB RAID5 (WD+Seagate) Dell T630 server | Corsair AXi1500 | CaseLabs SMA8 gunmetal - watercooling: 2 x EK Supreme LTX CPU + Nickel-Acetal full cover VGA | LG38UC99+Dell U3014 + 2 X Dell U2715

  16. #291
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Yeah, the heat restriction makes sense.
    Quote Originally Posted by saaya View Post
    so they HAVE to go dual gf110... which means heat heat heat and some more heat... but how are they going to take care of the memory width issue? and where will they put all the memory chips? dual pcb again? phew!
Going to be one hot card, yep. And memory chips? Just put them on the back of the PCB... The PCB is going to be bigger than the 580's, anyway...
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  17. #292
    Xtreme Mentor
    Join Date
    Jun 2008
    Location
    France - Bx
    Posts
    2,601
    Quote Originally Posted by saaya View Post
    and where will they put all the memory chips? dual pcb again? phew!




    http://bbs.expreview.com/thread-39523-1-1.html

    edit : already posted by cold2010 page 3, sorry
    Last edited by Olivon; 02-08-2011 at 04:33 AM.

  18. #293
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Olivon View Post




    http://bbs.expreview.com/thread-39523-1-1.html

    edit : already posted by cold2010 page 3, sorry
That's gonna be an expensive PCB, I think...
Must be many layers and lots of SMD parts on both sides...
Wonder how much this will cost... at least $699, probably $999

  19. #294
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
There is a way for NVIDIA and AMD to overcome issues with the PCI-E certification: let their board partners do the legwork. We've seen this done quite successfully with the ASUS Ares, ASUS Mars, Sapphire HD 4850 X2, XFX HD 5970 Black Edition, etc.

    All they have to do is send a set of design parameters and specifications to companies who are able to implement their own solutions from the ground up and let them deal with the rest. It's a money-saving proposition, allows NVIDIA / AMD to introduce class-leading products and skirts the certification OEMs want attached to products.

  20. #295
    Xtreme Mentor
    Join Date
    Jun 2008
    Location
    France - Bx
    Posts
    2,601
I won't be surprised if Asus brings out an Ares-like card with 2 GF110s, full specs, 6 GB of RAM and 3x 8-pin power connectors

  21. #296
    Xtreme Addict
    Join Date
    Nov 2003
    Location
    Oslo, Norway
    Posts
    1,218
    Quote Originally Posted by Olivon View Post
I won't be surprised if Asus brings out an Ares-like card with 2 GF110s, full specs, 6 GB of RAM and 3x 8-pin power connectors
    loving the idea

  22. #297
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
exactly .. asus evga etc.. are the ones who are supposed to go beyond the limitations with their special omgwtfbbq edition gpus .. not amd or nvidia ...
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  23. #298
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
Since partners got total freedom to do whatever they want with the GTX560, is there any hope of seeing a triple-GTX560 on a single PCB from some partners? That could beat the hell out of the 6990 too, LOL.

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  24. #299
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Sam_oslo View Post
Since partners got total freedom to do whatever they want with the GTX560, is there any hope of seeing a triple-GTX560 on a single PCB from some partners? That could beat the hell out of the 6990 too, LOL.
The PCB would be too large, and the heat and power draw unmanageable.
Also, it doesn't make much sense to use a lot of smaller GPUs instead of a few bigger ones, because you run into scaling issues.
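The scaling point is easy to quantify: the xbitlabs review quoted earlier measured ~70% average 2-way scaling for the GTX 560, and a third GPU typically adds less again. A rough sketch (the 3-way increment is an assumption, not a benchmark):

Code:
# Incremental gain of each added GTX 560: the 0.70 figure is the
# average 2-way SLI scaling from the xbitlabs article quoted earlier;
# the 0.45 for a third GPU is a guess.
increments = [1.00, 0.70, 0.45]
total = 0.0
for n, inc in enumerate(increments, start=1):
    total += inc
    print(f"{n}x GTX 560: {total:.2f}x one card")
# -> 1.00x, 1.70x, 2.15x: the third chip buys the least and still
#    costs full board space, power, and heat.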
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  25. #300
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by saaya View Post
yeah, I don't get it either...
it's not like the 300W limit actually limits power consumption of systems anyways...
people who want extra power get up to 4 300W cards...

actually, it would be more power efficient to allow a single VGA with 400W then, cause then people who want ultra performance would run 2x400 instead of 3x300 or 4x300...
and people who want this kind of power overclock anyways, so the limitation doesn't matter at all...

300W is clearly not enough and is limiting cards... so I'm surprised nvidia and amd are not pushing for more...
think about it, if there hadn't been a 300W limit then nvidia could actually have launched a GTX480 Ultra with 512 SPs and a 350W TDP and a massive heatsink...

and ati could have launched the 5970 at full 5870 clocks instead of clocking it down...

ah well... if ati and nvidia don't care...
Right. AMD and Nvidia surely would sell more if the PCI-E spec allowed 300+ W cards.

Those monster cards probably sell at a loss, but they have a marketing impact, so they actually increase sales across the board. That's the sole reason the cards exist. The market is non-existent, and half of the cards are sent around/given away for free (press samples, competition prizes) anyway to gain as much PR as possible.
    Last edited by Calmatory; 02-08-2011 at 08:19 AM.
