-
nVidia 'Kepler' GeForce GTX 680 Reviews
-
Aye, the first thing I said when I saw the 7970 results was that it wasn't fast enough for the node transition. Nvidia have proven me right, and then some; this is actually faster than expected for the size of the core.
-
Great card, first time in ages that I want to get a high-end card - power consumption and temperature to performance is just legendary on this thing, but so is the price :(
I might get one if they come down to around £350 for a custom dual-fansink design, plus it would be great with an 8+6-pin custom design for higher overclocking.
-
Compute has fallen :( Back to gaming :)
Can somebody point me to OCed (7970) vs OCed (GTX 680) comparisons please ?
-
About equal; once highly overclocked, both perform more or less the same. A 7970 at 1250/6400 does about 3550-3700 in 3DMark 11 Extreme, and this card too does 3600+ once overclocked. More or less at par under heavy loads once overclocked fully.
-
-
Quote:
Originally Posted by
Monstru
So the Spitfire fits!?
Does it fit with the original backplate (that's how I want to use it)?
Does it have the same mounting holes as 580?
I just ordered an EVGA GTX680, I want to use the Spitfire and keep the warranty intact.
-
Quote:
Originally Posted by
akshayt
About equal; once highly overclocked, both perform more or less the same. A 7970 at 1250/6400 does about 3550-3700 in 3DMark 11 Extreme, and this card too does 3600+ once overclocked. More or less at par under heavy loads once overclocked fully.
At 1325MHz.
http://i44.tinypic.com/2h3ooat.png
-
Quote:
Originally Posted by
Tao~
Compute has fallen :( Back to gaming :)
Can somebody point me to OCed (7970) vs OCed (GTX 680) comparisons please ?
No one really knows whether compute has fallen since NVIDIA has yet to roll out finalized drivers that include compute optimizations.
Comparing OC to OC is very hard since every sample overclocks to a different level.
-
I can't decide about this card. In some tests I say, "Ouch, this is not good for Nvidia, why should I buy this over a 7970," and in another test I say, "Wow, I can't believe it, this is magnificent." Looking at game benchmarks of this card is going to make me manic depressive.
-
At 1300-1350 MHz even a Radeon 7970 will give you around 3800 points. But for that you need an Accelero/custom water cooling, not stock - at least not unless you plan to use 100% fan with stock cooling :)
-
Everything is out of stock ..... that was fast .... 1 hr and 12 mins
-
2 Attachment(s)
Just wanted to put this up here so we can get a perspective:
Attachment 124758
Which led to......
Attachment 124759
And a price of $499. :)
-
Quote:
Originally Posted by
SKYMTL
No one really knows whether compute has fallen since NVIDIA has yet to roll out finalized drivers that include compute optimizations.
Compute has 'fallen' in the sense that this was not architected for continuing the upward compute performance curve, and it shows. Not that it was necessary - Fermi did that, and AMD needed GCN to compete in that segment. Moreover, from talks, GK110 seems to be the "big, hot, compute" chip.
Quote:
Comparing OC to OC is very hard since every sample overclocks to a different level.
Yeah I understand, but you could take the median value that these cards manage on air with stock coolers - I guess I'll have to wait a few days for further insight.
Eagerly awaiting an AMD price cut ...
-
Quote:
Originally Posted by
SKYMTL
Is that GK110 the one that comes in Q4, or the old 580? If it's the new one, that would be crazy.
-
Quote:
Originally Posted by
SKYMTL
Basically sums it up.
That $499 price tag could've been lower if not for the 7970's high price upon release. No?
But that didn't happen, and I ain't gonna upset the mood in here so...:up:;)
-
Quote:
Originally Posted by
Mech0z
Is that GK110 the one that comes in Q4, or the old 580? If it's the new one, that would be crazy.
The GF110 was about equal to the GF100 in terms of die area. :)
-
1 Attachment(s)
680 looks really nice and given that I might have to RMA my 7950 because of a growing fan bearing issue I might jump over.
Quote:
Originally Posted by
SKYMTL
That is quite inaccurate man. :/
This is actually how it looks: (521mm^2 365mm^2 294mm^2)
Attachment 124763
-
Quote:
Originally Posted by
Kallenator
680 looks really nice and given that I might have to RMA my 7950 because of a growing fan bearing issue I might jump over.
That is quite inaccurate man. :/
This is actually how it looks: (521mm^2 365mm^2 294mm^2)
Attachment 124763
This is actually how it looks? NOOOOO
This is actually how it looks:yepp:
http://www.abload.de/img/gggge2k1z.jpg
-
I think Kall is right. When you deal with area, it's common for people to be off by a factor of 2.
-
-
Bah!!! I just visited Newegg and selected an EVGA GTX 680, added it to my cart, and got all the way through checkout to the point where you make that last final click... and when I did, the card was removed from my cart, and since then all GTX 680s have been out of stock. :(
-
Quote:
Originally Posted by
Manicdan
I think Kall is right. When you deal with area, it's common for people to be off by a factor of 2.
Maybe my calculator was wrong. ;)
-
Quote:
Originally Posted by
Tha Last Meal
This is actually how it looks? NOOOOO
This is actually how it looks:yepp:
Watch out, fanboy who failed at geometry :p <- humorous intent of reply
Quote:
Originally Posted by
Manicdan
I think Kall is right. When you deal with area, it's common for people to be off by a factor of 2.
Thanks ^^
Quote:
Originally Posted by
Monstru
Here's the full pic, if you really want to draw some conclusions... :)
And a bonus one...
Thanks, but it's a bit hard to see if the intention is to compare between them. ;)
It really isn't to draw conclusions or anything like that; I just noticed how far off Skymtl was, and given that he uses this in his excellent review, I think it's quite sloppy.
-
319 mm², not 294. Please compare die sizes with or without "service area", not one to the other.
-
Quote:
Originally Posted by
Kallenator
680 looks really nice and given that I might have to RMA my 7950 because of a growing fan bearing issue I might jump over.
That is quite inaccurate man. :/
This is actually how it looks: (521mm^2 365mm^2 294mm^2)
Attachment 124763
You are correct it seems. The image has been removed until I can modify it. Thanks!
-
Take a caliper and measure some dies while you're at it :)
-
Quote:
Originally Posted by
Kallenator
Watch out, fanboy who failed at geometry :p
Take it easy man I'm just kidding with you :)
-
I always thought that AMD had some secret drivers lying around, because the shader scaling from 7870 to 7970 is horrible: 60% more shaders for 20% more performance.
Though since they didn't appear with the GTX 680's release, that sounds unlikely. All hail price drops, though I won't be buying anything this round.
-
Quote:
Originally Posted by
SKYMTL
You are correct it seems. The image has been removed until I can modify it. Thanks!
Hey, no problem =)
Btw, I wouldn't mind fixing it for you.
Quote:
Originally Posted by Tha Last Meal
Take it easy man I'm just kidding with you
I just tried to be funny back ^^
-
Hmmm, so what if the GTX 580s fall in price to £200 like the GTX 480s did?
A single 680 would probably still be better to have than a pair of 580s - not just for performance, but for temperatures and power consumption too.
Hurry up price wars!
After reading more reviews my opinion is that 1 GTX 680 >>> 2 GTX 580s. It's only slightly behind a GTX 590 in most games, with just a little more power consumption than a single GTX 560 Ti, and the temperatures are unbelievable for a high-end card.
I must have one.
-
Quote:
Thanks, but it's a bit hard to see if the intention is to compare between.
Just showing that GK104 is the smallest high-end chip since the HD 4870 :)
-
Quote:
Originally Posted by
Kallenator
I just tried to be funny back ^^
np,:)
-
I like GK104, very good chip, efficient and fast. But I vote for a custom-cooled one (Twin Frozr, DirectCU) with 4GB.
After GF104 and GF114, Nvidia has made a midrange chip that can really fight with AMD's top part.
And HD 7990 vs GTX 690 won't change anything in this fight until summer.
Nvidia goes for GK110 after summer.
AMD?
-
I see that the memory/VRM cooling is separated from the main core heatsink... a universal WB should work fine. Although people should be careful with memory/VRM temperatures, since they can't be measured in the OS like on Lightning cards :down:
People, I see that this card uses the GK104. The upcoming GK110 will power something like a "GTX 780"? Is there any expected release date?
-
Quote:
Originally Posted by
blindbox
I always thought that AMD had some secret drivers lying around because the shader scaling from 7870 to 7970 is horrible. 60% more shaders for 20% more performance.
Though since it didn't appear with GTX 680's release, sounds unlikely. All hail price drops, though I won't be buying anything this round.
About 50% more shader power and 70% more bandwidth, which translates into 20% more performance. That chip is seriously bottlenecked somewhere.
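For what it's worth, the scaling claim can be sanity-checked with a quick sketch using public reference specs (numbers are my own assumption, not from this thread; game performance won't track raw throughput linearly anyway):

```python
# Back-of-the-envelope scaling check for the HD 7870 vs HD 7970 argument above,
# using reference clocks and memory specs.
def relative_gain(base, upper):
    """Percentage increase of `upper` over `base`."""
    return (upper / base - 1) * 100

# HD 7870: 1280 shaders @ 1000 MHz, 153.6 GB/s (256-bit @ 4.8 Gbps)
# HD 7970: 2048 shaders @  925 MHz, 264.0 GB/s (384-bit @ 5.5 Gbps)
shader_gain = relative_gain(1280 * 1000, 2048 * 925)  # throughput ~ count * clock
bw_gain = relative_gain(153.6, 264.0)

print(f"shader throughput: +{shader_gain:.0f}%")  # ~48%
print(f"memory bandwidth:  +{bw_gain:.0f}%")      # ~72%
```

So at reference clocks the 7970 carries roughly 48% more shader throughput and 72% more bandwidth than the 7870, yet only ~20% more performance - which points at a front-end or scheduling bottleneck rather than a bandwidth one.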
-
-
Quote:
Originally Posted by
Gilgamesh
I like gk104, very good chip, efficient and fast. But I vote for a custom cooled (Twin Frozr, DirectCU) with 4GB.
After gf104 and gf114, Nvidia make a midrange that can really fight with top Amd.
And HD7990 vs GTX690 will don't change anything in this fight until summer.
Nvidia goes for gk110 after summer.
Amd?
I doubt they are releasing it this year; the GK110 will literally be the GTX 780, and Nvidia doesn't need to release it for a year, or maybe, if you're really lucky, November. The next cards 'on the cards' will be dual GPUs, both from AMD and Nvidia, just like you mentioned.
-
Any SLI reviews?
Or are there no drivers for that yet?
-
3 Attachment(s)
-
Quote:
Originally Posted by
Iconyu
I doubt they are releasing it this year; the GK110 will literally be the GTX 780, and Nvidia doesn't need to release it for a year, or maybe, if you're really lucky, November.
AMD averages 14 months between product releases, so I doubt we'll see anything even resembling HD8xxx until Q4 at the very earliest.
nVidia typically operates on a 12 month product cycle, with the notable exception called Fermi.
For both AMD and nVidia, it typically takes about 9 months to release all products from each cycle. The last two dual-GPU cards nVidia released came only 5-6 months after the first cards in each series or refresh were launched.
Depending on what AMD does though, nVidia has moved forward products quite a bit. GTX580 came only 8 months after GTX480. Also, GK110 taped out in early Feb, 2012... which makes Q3 the earliest possible release time frame if they really needed to rush it out.
-
-
Quote:
Originally Posted by
Andrew LB
Maybe Vince can clear this up but was the card running at 1800+ MHz in EVERY 3D test in 3DMark11? I ask because GPU Boost fluctuates the clock speeds in 3DM11:
http://images.hardwarecanucks.com/im...TX-680-122.gif
Unfortunately, this will mean that validating clock speeds on the GTX 680 will require a whole new set of rules since a clock offset set in software may not be carried over to all sections of a benchmark.
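A sketch of why a single validated number is ambiguous under GPU Boost: if you log the clock over a run, the honest report is a range, not one figure. The sample values below are made up purely for illustration:

```python
# Hypothetical per-scene clock samples (MHz) from one benchmark run.
# GPU Boost retargets the clock continuously, so any single "validated"
# frequency only describes one moment of the run.
from statistics import mean

samples = [1097, 1110, 1124, 1150, 1097, 1189, 1176, 1110]  # made-up values

lo, hi = min(samples), max(samples)
print(f"boost clock ranged {lo}-{hi} MHz (avg {mean(samples):.0f} MHz)")
```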
-
Skyrim benches are where it really shines! Wow
-
Wow, NVIDIA has a killer core on their hands. The price, performance and power consumption are all massive improvements. I like this new NVIDIA a lot. Now I can just hope this pushes the HD 7970 prices down to something like $450.
-
3 times more shaders, but only 30% or so more performance? Not really a worthy upgrade over the GTX 580, TBH.
-
If it was 3x the old shaders it would have been a 450W GPU and around 800mm², even after the shrink!
Worthy upgrade or not is your opinion, though.
-
Quote:
Originally Posted by
ice_chill
3 times more shaders, but only 30% or so more performance? Not really a worthy upgrade over the GTX 580, TBH.
It is important to understand the architectural development on Fermi and Kepler and how they differ.
Essentially, Fermi was die size limited which meant NVIDIA had to add performance without increasing the transistor count. In order to do this, they ran the shader domain at double the speed of the other processing stages. Unfortunately doing so increased heat production and power consumption.
GK104 meanwhile doesn't have the same limitation, partially due to the 28nm manufacturing process and partially due to optimizations within the architecture that limit the number of transistors needed for certain processing stages. This has allowed for a drastic increase in the core count, but in order to keep power consumption to reasonable levels, NVIDIA is now running the clocks at a 1:1 ratio. There are some other changes, like PolyMorph Engine reductions (even though they do run at close to double the speed) and caching changes, that can also account for the non-linear performance increase from a Kepler GPC to a Fermi GPC.
If you have any particular questions, let me know. :)
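The hot-clock point is easy to put numbers on. A rough sketch using the reference clocks and the standard 2-FLOPs-per-core-per-cycle (FMA) counting - my assumption, not figures from this thread:

```python
# Fermi ran shaders at 2x the core clock; Kepler runs them 1:1.
# So tripling the core count roughly doubles, not triples, raw throughput.
def peak_gflops(cores, shader_mhz):
    # single precision, 2 FLOPs (one FMA) per core per cycle
    return cores * shader_mhz * 2 / 1000

gtx580 = peak_gflops(512, 1544)   # 512 cores @ 1544 MHz hot clock (2x the 772 MHz core)
gtx680 = peak_gflops(1536, 1006)  # 1536 cores @ 1006 MHz base, 1:1 with core

print(f"GTX 580 peak: {gtx580:.0f} GFLOPS")
print(f"GTX 680 peak: {gtx680:.0f} GFLOPS ({gtx680 / gtx580:.2f}x)")
```

That's roughly a 2x raw throughput gain from "3x the shaders," before any of the front-end and caching differences mentioned above.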
-
Quote:
Originally Posted by
[XC] gomeler
Wow, NVIDIA has a killer core on their hands. The price, performance and power consumption are all massive improvements. I like this new NVIDIA a lot. Now I can just hope this pushes the HD 7970 prices down to something like $450.
Well, then again, if the 7950/7970 had been much more appropriately priced, the GTX 680 would have cost even less :p:
Remember that GK104 was always meant to have been this generation's midrange, not the high end. Now Nvidia get to sit back and take it easy for a year or so with the GK104 performing so well, and then whenever AMD manage to catch up yet again, spouting their 'Verdetrol' garbage marketing, out comes the GK110, which beats AMD again with a year-old architecture.
And please note I was an ATI fanboy up until the HD 5000 range; both the 6000 and 7000 ranges have disappointed me significantly compared to what Nvidia have had on offer since their Fermi refresh.
Quote:
Originally Posted by
ice_chill
Not really a worthy upgrade over the GTX 580, TBH.
Well, the GK104 was meant to have been an 'upgrade' over the GTX 560 Ti, not an upgrade of the GTX 580. GK110 was meant to have been the GTX 580 replacement. I was meant to have been purchasing two GK104s right now for <£450 for my next SLI setup, but due to AMD's underwhelming HD 7000 cards that won't be happening.
-
Quote:
Originally Posted by
bhavv
Well, then again, if the 7950/7970 had been much more appropriately priced, the GTX 680 would have cost even less :p:
No way to know that.
The 7000 GPUs are nothing special when it comes to price/perf, but if you wanted lower power and more perf, you paid for it, simple as that. It beat the competition in every way but pricing. If they had launched at $450 for the 7970, I doubt Nvidia would have sold the 680 for anything under $450, since they can still push it as being a faster GPU and worth more.
-
I was hoping it would fare better in luxrender =(
-
Quote:
Originally Posted by
Manicdan
No way to know that.
The 7000 GPUs are nothing special when it comes to price/perf, but if you wanted lower power and more perf, you paid for it, simple as that. It beat the competition in every way but pricing. If they had launched at $450 for the 7970, I doubt Nvidia would have sold the 680 for anything under $450, since they can still push it as being a faster GPU and worth more.
Sorry, I didn't just mean the pricing, but also if the 7950/7970 had performed significantly better and given Nvidia a reason to release the GK110 to be able to compete in the high end.
That didn't happen.
-
Quote:
Originally Posted by
bhavv
Sorry, I didn't just mean the pricing, but also if the 7950/7970 had performed significantly better and given Nvidia a reason to release the GK110 to be able to compete in the high end.
That didn't happen.
Trust me, if they had GK110 ready to go, it would be out now and selling for $800.
If they wait until September, they sure don't save money by sitting on a ready part, but they will lose about $200-300 per GPU that would have sold between now and then.
And 2 months ago when the 7900s launched, they would have been too late in the process to make any drastic changes like launching in 2 months vs 10 months. Their schedule internally has been set for September (if that really is their launch time) for at least a year now.
-
I just don't believe that the 256-bit GK104 was ever designed or meant to have been released as a competitor in the high-end GPU segment if AMD had pulled off what was expected from their 28nm architecture.
-
There go the Nvidia haters: if it outperforms the 7970, it's going to cost more. Nvidia was actually being modest when they said they expected more from AMD.
Just imagine if GK110 came out first and GK104 was released second. This would have been a beatdown worse than the 8800 GTX.
GK110 is going to be a beast. At $649 or $699, GK110 will completely warrant its price, as it should step on the toes of the 7990.
-
Does the 680 support Bitstreaming audio?
-
Quote:
Originally Posted by
kam03
does the 680 support bitstreaming audio?
Yes.
-
Here's my review if anyone's interested - http://bit.ly/GTX680Review
-
Quote:
Originally Posted by
Manicdan
Trust me, if they had GK110 ready to go, it would be out now and selling for $800.
If they wait until September, they sure don't save money by sitting on a ready part, but they will lose about $200-300 per GPU that would have sold between now and then.
And 2 months ago when the 7900s launched, they would have been too late in the process to make any drastic changes like launching in 2 months vs 10 months. Their schedule internally has been set for September (if that really is their launch time) for at least a year now.
Exactly. Holding back a new core that could absolutely stomp AMD would make very little financial sense. Launching GK104, which is a smaller die on the unknown 28nm process, gives NVIDIA the chance to figure things out before they release a massive 250-300W monster. That is of course if NVIDIA chooses that route. Who knows, maybe they'll instead release a dual-GPU card based on binned GK104 cores. I'd be happier to see NVIDIA adopt the AMD route of smaller, more efficient cores.
Now, how in the heck is AMD going to counter this? They can't really ramp up the clocks on Tahiti to gain another 30% in performance. I couldn't imagine AMD would want to get into the gigantic-GPU market that NVIDIA has traditionally run in. Do they release a 7990 with binned Tahiti cores? Or figure out their scaling issues between Pitcairn and Tahiti (7870/7970)? Would be great if it all came down to driver optimizations.
-
Quote:
Originally Posted by
[XC] gomeler
Exactly. Holding back a new core that could absolutely stomp AMD would make very little financial sense.
I disagree; holding it back means that they don't need to spend as much on R&D for their next card. They have a card out now that comfortably beats the 7970, so they had no need to release a better one.
The GTX 680 was actually originally meant to have been the 670 Ti:
http://www.xtremesystems.org/forums/...-as-GTX-670-Ti
I'm sure that the GTX 680 would have at least been a 384-bit card with an 8+6-pin power connector.
-
Mmmm, the card looks great, until I read the XBit review and saw the overclocking results. Is it just me, or does it perform slightly worse than the 7970 when they are both overclocked? If this is the case, then I'll have to think long and hard about which to go for, as I generally OC anything I buy.
-
Quote:
Originally Posted by
bhavv
I disagree; holding it back means that they don't need to spend as much on R&D for their next card. They have a card out now that comfortably beats the 7970, so they had no need to release a better one.
The GTX 680 was actually originally meant to have been the 670 Ti:
http://www.xtremesystems.org/forums/...-as-GTX-670-Ti
I'm sure that the GTX 680 would have at least been a 384-bit card with an 8+6-pin power connector.
Perhaps. With the GTX 680 in theory being cheaper to manufacture than the HD 7970, it would make sense to hold the GTX 680 as the flagship card until AMD releases their HD 7990. Then, releasing a GTX 685/690 based on a 300W Kepler core (with a die size somewhere around GF100-G200) would set them up to compete on performance, with less expensive silicon going into this performance-crown card.
Still, I doubt they developed GK104 and GK110 at the same time. It makes more sense from a risk-management point of view to develop a small core on 28nm. Beats the hell out of having another GTX 480.
-
Quote:
Originally Posted by
Motiv
Mmmm, the card looks great, until I read the XBit review and saw the overclocking results. Is it just me, or does it perform slightly worse than the 7970 when they are both overclocked? If this is the case, then I'll have to think long and hard about which to go for, as I generally OC anything I buy.
When both cards were overclocked in that review, there is so little difference between them that I wouldn't base my purchase decision on FPS alone. You have to remember that the 7970 is being overclocked from 925 to 1150 MHz, and the GTX 680 from 1006 to 1186 MHz in that review, so the 7970 is being given a larger boost over its reference clocks than the GTX 680 is.
Quote:
Originally Posted by
[XC] gomeler
Still, I doubt they developed GK104 and GK110 at the same time. Makes more sense from a risk management point to develop a small core on 28nm. Beats the hell out of having another GTX 480.
I don't think that they did either, but I think that a 384-bit GK104 was being developed as the GTX 680, and TechPowerUp have shown solid evidence that this 256-bit version was originally the GTX 670 Ti. GK110 is coming out much later.
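The relative-boost point above is easy to make concrete with the clocks quoted from that XBit review:

```python
# Overclock headroom relative to reference clocks, using the speeds
# quoted above from the XBit review.
def oc_percent(stock_mhz, oc_mhz):
    return (oc_mhz / stock_mhz - 1) * 100

hd7970 = oc_percent(925, 1150)   # HD 7970 reference -> reviewed OC
gtx680 = oc_percent(1006, 1186)  # GTX 680 base clock -> reviewed OC

print(f"HD 7970: +{hd7970:.1f}% over reference")
print(f"GTX 680: +{gtx680:.1f}% over reference")
```

So the 7970 was given roughly a 24% boost against roughly 18% for the 680, which skews any "OC vs OC" comparison from that review.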
-
Quote:
Originally Posted by
SKYMTL
It is important to understand the architectural development on Fermi and Kepler and how they differ.
Essentially, Fermi was die size limited which meant NVIDIA had to add performance without increasing the transistor count. In order to do this, they ran the shader domain at double the speed of the other processing stages. Unfortunately doing so increased heat production and power consumption.
GK104 meanwhile doesn't have the same limitation partially due to the 28nm manufacturing process and partially due to optimizations within the architecture that limit the number of transistors needed for certain processing stages. This has allowed for a drastic increase in the core count but in order to keep power consumption to reasonable levels, NVIDIA is now running the clocks at a 1:1 ratio. There are some other changes like PolyMorph Engine reductions (even though they do run at close to double the speed) and caching changes that can also count towards the non-linear performance increase from Kepler GPC to Fermi GPC.
If you have any particular questions. Let me know. :)
I thought the shaders were still running at twice the core frequency on the 680?
-
Quote:
Originally Posted by
OCX600RR
I thought shaders were still running twice the core frequency on the 680?
I think it was early GPU-Z builds reporting that in error; it's 1:1 now.
-
Quote:
Originally Posted by
Russian
Nice one, thanks :up:
-
As usual I like SweClockers' reviews: http://translate.google.com/translat...mt-sli&act=url
Luckily I'm a Swedish-speaking Finn, so I don't need to use Google Translate either. It has SLI vs CrossFire tests too, overclocking, and power consumption under both circumstances for both ATI and Nvidia, etc. Nice to see that a maxed-out stable OC on a GTX 680 draws just marginally more power than a stock HD 7970.
http://www.sweclockers.com/image/dia...e406d1666ceed9
On the other hand, the HD 7970 allows for slightly bigger OC potential, comparing standard GPU BIOSes at least (the GTX 680 could probably take a little higher than 1.1V even on the standard cooler, I think - at least 1.125-1.150 IMO. I personally wish 1.150 had been the max with the default BIOS, which would probably be enough to get to a roughly similar OCability ratio as the HD 7970).
http://www.sweclockers.com/image/dia...ceb89661d0234b
-
I would imagine GK110 will add the features that were subtracted from this release. Maybe it's something like what Intel has started doing, with a separate consumer-targeted arch and a separate arch for the professionals. So that means GK104 = SNB and GK110 = SNB-E.
Very good card, an excellent alternative to the 7970 or, for that matter, the 7950. It's almost weird that early reviews pointed to a huge load consumption whereas the latest ones don't....
Nonetheless, AMD can't just OC the 7970 and sell it against the 680; it gives up too much in efficiency.
-
I'm hearing whisperings from the AMD camp... They don't sound scared.
-
Quote:
Originally Posted by
Russian
I'm hearing whisperings from the AMD camp... They don't sound scared.
Well, can you go into any detail?
Not worried as in a dual card coming? Or a faster single-chip card scenario?
Sent from my GT-I9100 using Tapatalk
-
Quote:
Originally Posted by
Russian
I'm hearing whisperings from the AMD camp... They don't sound scared.
Of course, what other whisperings would be coming from the competition, "time to close up shop they beat us, hug me"...
-
Quote:
Originally Posted by
highoctane
of course, what other whisperings would be coming from the competition, "time to close up shop they beat us, hug me"...
lol....
-
Quote:
Originally Posted by
highoctane
Of course, what other whisperings would be coming from the competition, "time to close up shop they beat us, hug me"...
Ok, I got a hearty laugh out of that one.
Trust me, when I can share more details I will. THE WHOLE WORLD WILL KNOW! ( I hope )
-
@ Russian and/or SKYMTL: Any clue how the quality (not speed) of the video transcoding is? I ought to check all the reviews out there to see if anyone looked at quality too, but I'm kinda lazy :)
-
Quote:
Originally Posted by
jjj
@ Russian and/or SKYMTL: Any clue how the quality (not speed) of the video transcoding is? I ought to check all the reviews out there to see if anyone looked at quality too, but I'm kinda lazy :)
I haven't looked in-depth into the quality, but it is interesting because with the only provided software we were given there isn't much flexibility with what you can do when it comes to GPU hardware acceleration. CPU based actually lets you pick between quality and performance while GPU accelerated does not. I think that we'll need to get some more software available before we'll be able to test this appropriately.
-
Quote:
Originally Posted by
Russian
I haven't looked in-depth into the quality, but it is interesting because with the only provided software we were given there isn't much flexibility with what you can do when it comes to GPU hardware acceleration. CPU based actually lets you pick between quality and performance while GPU accelerated does not. I think that we'll need to get some more software available before we'll be able to test this appropriately.
Thanks for the answer.
-
Quote:
Originally Posted by
Russian
I'm hearing whisperings from the AMD camp... They don't sound scared.
Welp, because if anything, <$400 is where all the money's at after the halo product.
And GK104 serves to highlight Tahiti's issues more than GCN's adaptability and perf/mm (+W) in general.
Keep in mind that the top AMD chip hasn't been the epitome of efficiency (Barts vs Cayman, Pitcairn vs Tahiti), and a lot of the front-end bottlenecks present at the performance level of Tahiti/GK104 are much less present at even the GTX 580/Pitcairn level. And GK106, if there is one, still isn't ready.
As for Tahiti... AMD's launching a GHz edition, EOLing the old stuff (?) and probably price-cutting it $50.
Not sure if they can drive the power usage down with lower voltage, but they sure should start using crappier PCBs like nVidia is doing now :v
-
Quote:
Originally Posted by
Macadamia
crappier PCBs like nVidia is doing now :v
Please expound upon your statement here.
-
A GF114 card selling for $220 probably has a better build than what's on the GK104 now.
http://www.expreview.com/img/review/...ga560Ti_05.jpg
Not that it's an attack on nVidia; in fact, if anything, AMD should just do the same thing and cheapen up on official PCBs.
After all, nobody burned their 570 after some mayhem - oops.
-
Has anyone got a quick link to a review featuring a 680 vs 5970 at 2560x1600 in a few different games? Bonus points for overclocking either one of those :)
-
Quote:
Originally Posted by
UrbanSmooth
Please expound upon your statement here.
NVIDIA is using cheaper PCB components to save costs.
-
Quote:
Originally Posted by
Russian
I'm hearing whisperings from the AMD camp... They don't sound scared.
I wouldn't say scared at all.
However, they're in a bit of a panic right now for one simple reason: they didn't have an alternate pricing strategy. The expectation was that the GTX 680 would continue the course set by the HD 7970, i.e. receive a price around the $600-$625 mark.
-
Quote:
Originally Posted by
jjj
@ Russian and/or SKYMTL: Any clue how the quality (not speed) of the video transcoding is? I ought to check all the reviews out there to see if anyone looked at quality too, but I'm kinda lazy :)
The quality with the included app is ok. On the positive side, MediaCoder fully supports the GTX 680 if you wanted more control over transcoding / encoding.
-
Quote:
Originally Posted by
BeepBeep2
NVIDIA is using cheaper PCB components to save costs.
Have you not seen the junk some of AMD's board partners are putting on some of their "reference" cards? Particularly the HD 7950. It ain't pretty....
-
Quote:
Originally Posted by
SKYMTL
I wouldn't say scared at all.
However, they're in a bit of a panic right now for one simple reason: they didn't have an alternate pricing strategy. The expectation was that the GTX 680 would continue the course set by the HD 7970, i.e. receive a price around the $600-$625 mark.
Well, they will have a hard time with previous customers if they lower current prices, lol.
-
Quote:
Originally Posted by
SKYMTL
I wouldn't say scared at all.
However, they're in a bit of a panic right now for one simple reason: they didn't have an alternate pricing strategy. The expectation was that the GTX 680 would continue the course set by the HD 7970, i.e. receive a price around the $600-$625 mark.
Hehe, I think it would've been absolutely ridiculous for AMD to not have seen this coming...
-
I wonder how NVIDIA pulled such nice performance out of a 256-bit memory bus, and I wonder how much faster GK110 will perform.
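For reference, the raw numbers behind that question - GDDR5 bandwidth is just bus width times effective data rate (reference memory specs assumed, not from this post):

```python
# Peak memory bandwidth: bus width (bits) * effective data rate (MT/s),
# divided by 8 bits per byte, scaled to GB/s.
def bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits * effective_mts / 8 / 1000  # GB/s

gtx680 = bandwidth_gbs(256, 6008)  # 256-bit @ 6 Gbps GDDR5
hd7970 = bandwidth_gbs(384, 5500)  # 384-bit @ 5.5 Gbps GDDR5

print(f"GTX 680: {gtx680:.1f} GB/s")
print(f"HD 7970: {hd7970:.1f} GB/s")
```

So the 680 is competing with roughly 27% less raw bandwidth than Tahiti, which says a lot about how the clocks and caching changes are carrying it.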
-
Quote:
Originally Posted by
Russian
Hehe, I think it would've been absolutely ridiculous for AMD to not have seen this coming...
The architecture, yes. The price, no. ;)
-
Out of the last several Nvidia flagship GPUs, the 680 is the most underwhelming. The 8800/9800/280/285/480/580 all handily beat AMD. Why is the 680 losing in some benchmarks?
-
Quote:
Originally Posted by
SKYMTL
Have you not seen the junk some of AMD's board partners are putting on some of their "reference" cards? Particularly the HD 7950. It ain't pretty....
Are you talking,
"reference"
as in
"the PCB with AMD logo silkscreened and waterblocks are designed for"
or as in
"our PCB is almost the same layout as the reference with cheap :banana::banana::banana::banana: components"?
Most of the time, AMD reference PCBs are far overkill...
-
Quote:
Originally Posted by
BeepBeep2
Are you talking,
"reference"
as in
"the PCB with AMD logo silkscreened and waterblocks are designed for"
or as in
"our PCB is almost the same layout as the reference with cheap :banana::banana::banana::banana: components"?
Most of the time, AMD reference PCBs are far overkill...
One could assume the latter, almost surely.
-
Quote:
Originally Posted by
jaredpace
Out of the last several Nvidia flagship GPUs, the 680 is the most underwhelming. The 8800/9800/280/285/480/580 all handily beat AMD. Why is the 680 losing in some benchmarks?
Doesn't the 680 use less power than those last 2-3 cards?
Also, the 680 is the marketing department's flagship card, not the engineering department's flagship card.
-
Quote:
Originally Posted by
Generic user #2
the 680 is the marketing department's flagship card, not the engineering department's flagship card.
Quote of the year here regarding the GTX 680.
Bravo to you, sir.
-
Quote:
Originally Posted by
SKYMTL
The architecture, yes. The price, no. ;)
Actually, both. Even though they knew Kepler was coming, they HAD to know what kind of performance it had.
You and I both know how AMD could find that info out.
-
Have there been any reviews with Adobe Premiere Pro benchmarks? The rendering engine is GPU-powered by CUDA-based cards only. The amount of RAM and the number of CUDA cores directly affect the performance. Curious to see how the "tripling" of the cores scales the performance in Premiere's Mercury Playback Engine.