Wait, don't the PCI-E connectors have a power limit of 300W? That says 350W.
No, the 8-pin connectors are rated at ~150 watts each.
EDIT: I guess I should say 150+150+75. Technically the standard for two 8-pin PCI-E connectors is around 300 watts, but the PCI-E slot supplies an additional 75 watts. Keep in mind the rating is just a standard, not an absolute; most of us exceed the various standards when overclocking.
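For reference, a quick tally of the nominal budgets being discussed here (spec ratings only, not hard electrical limits):
Code:
# Nominal PCI-E power budget, using the spec ratings quoted above.
# These are standard ratings, not hard limits; overclocked cards routinely exceed them.
CONNECTOR_RATING_W = {"6-pin": 75, "8-pin": 150}
SLOT_RATING_W = 75  # delivered through the PCI-E slot itself

def board_power_budget(connectors):
    """Sum the slot rating plus the ratings of the external PCI-E connectors."""
    return SLOT_RATING_W + sum(CONNECTOR_RATING_W[c] for c in connectors)

print(board_power_budget(["8-pin", "8-pin"]))  # 150 + 150 + 75 = 375 W
print(board_power_budget(["6-pin", "8-pin"]))  # 75 + 150 + 75 = 300 W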
http://tof.canardpc.com/view/ad50609...6a8e7087f1.jpg
Well, the author knows nothing and it's pure speculation and guesses ...
Maybe when Nvidia says this architecture will have better performance per watt, it doesn't mean consuming less power, but making better use of the power the chip needs: on average about the same draw as Fermi, but with better performance.
*Sorry for my ugly English ... I'm practicing the language
That's basically what happened going from 40nm to 28nm. This has always been part of every launch talk: just look at the release slides for any GPU from the last six years and you'll find an "improved performance per watt" slide somewhere (AMD or Nvidia). With every new generation, AMD, Nvidia, Intel (and the ARM chip makers too) try to improve performance per watt, and they normally do (Fermi was a real accident in that regard).
Improving performance per watt is part of the job for each new generation (CPU, GPU, monitor).
Do PCI-E 3.0 slots have the ability to provide more power to the card from the board itself vs. PCI-E 2.0?
I thought they were limited by the board's actual physical properties (i.e. too many watts would simply fry the board), but I could well be wrong.
I've never heard of anything like that related to AIBs, especially in cases where the power is supplied directly from the power supply via the PCI-E power connectors. I could possibly see pushing things a little far with an overclocked video card that's powered strictly from the PCI-E slot through the motherboard.
At any rate, you would be reading about a lot of meltdowns if that were the case, especially on this forum, where overclocking and voltage tweaking push power draw much higher than normal, along with power supplies that have enough current reserve to weld with.
^^^Quote:
Originally Posted by Kyle Bennett
Disappointing, if true.
Just saw that myself... it will be interesting to see how it does with launch drivers, especially. I hope the pricing isn't stratospheric, however. Who knows how accurate this is, though...
With pre-launch drivers that they're still pushing on quickly, it doesn't sound disappointing to me assuming pricing is decent. With their smaller die size I would think they could compete on both price and performance here.
and later on he said he'll keep his 7970 XFire setup ...
Quote:
Originally Posted by Kyle
If I remember correctly, for PCI-E 3.0 to have provided more power they would have had to increase the traces on the mobo, which would have really added to the complexity and cost. From what I remember, PCI-E 3.0 provides the same amount of power as PCI-E 2.0.
So? What he said basically made the cards sound like they're going to trade blows. His system is still state of the art, so if the GK104 and 7970 really are similar in performance he has no reason to move. Besides, the big chip isn't here yet. Let's see if he still says that when he gets it in his hands. A few years back he didn't switch his 5870 Crossfire setup to GTX 480 SLI when it first came out either.
Matches my statement of about 1.5x over GTX580.
We all know shaders can end up idle. By allowing shaders that are being accessed to use the resources of the idle shaders, you fix that problem, greatly improving efficiency per shader.
Anyway, of course NVidia are contemplating calling this the 680... If you could beat the opposing company's high end with what would have been a midrange chip that's smaller, and you knew your opponent would take a while to counter, wouldn't you do the exact same thing? I'm thinking the 6xx series will be short-lived and they'll drop the 7xx (headed by GK110) when AMD refreshes this fall, where we'll see the 680 slightly tweaked with higher clocks as the GTX 760 Ti.
There is absolutely NO reason to upgrade if you have XF'ed 7970s. Name a game that XF'ed 7970s can't max out at perfect frame rates! The only reason there would be is if a huge title comes out where PhysX makes a big difference.
Not too disappointing. They said it beats the 7970 in games and game related benchmarks. So it's better than AMD's best where it counts. Definitely not disappointing to me.
Of course, with this beating the 7970, that means the GK110 may be the next G80.
I think it depends, at least for the consumer. If Nvidia and AMD performance is very similar, price cuts are not a necessity: since they perform similarly, they can be priced close to each other. And since AMD's pricing is at an all-time high for the company, that leaves room for very high pricing, particularly at the ultra high end. This is strictly hypothetical, but if GK104 outperforms the 7970 by 20% and is priced around $500, AMD would be forced to drop the price of the 79xx series by $100 or more. The lower Nvidia's pricing, the more AMD has to drop the prices of their cards. If Nvidia prices high enough, of course, AMD won't have to drop prices at all.
Also, I can't be the only one who couldn't give two flying f***s about GK104. I want some top-end news on GK110.
If GK104 is the only thing they're farting out the next few months then I'll stick with my GTX580.
If GK104 beats the 7970 and is cheaper then it's worth a look, but of course we want the big guns.
:)
You really believe that this card is going to be named GTX 680 and that it's going to be their midrange?
I don't think so.
Why not? The 6870 was AMD's midrange and took the "870" from the previous fastest card, the 5870. GK110 is not ready, and Nvidia's midrange chip can compete with AMD's performance/high-end hybrid, so this is just marketing. GK110 will probably be the 700 series. That would make sense, because it is rumored to be more capable on the shader front.
OK. But it doesn't make sense to me, and neither did AMD's previous number jump. Perhaps they're trying to catch a few more sales by giving the card a higher number, hoping that some of us will not check the card's performance before purchase. :ROTF:
Well, I hope people look at performance and price and compare it to the previous gen. If it's not at least a 40% improvement in fps/$, show it the finger. At least that's what I'm gonna do.
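Roughly how I'd run that fps/$ check once reviews land; the numbers below are made up purely to illustrate the math:
Code:
# Hypothetical numbers purely to illustrate the fps-per-dollar check;
# plug in real benchmark averages and street prices once reviews are out.
def fps_per_dollar(avg_fps, price_usd):
    return avg_fps / price_usd

old_gen = fps_per_dollar(avg_fps=60.0, price_usd=500.0)   # made-up previous-gen card
new_gen = fps_per_dollar(avg_fps=85.0, price_usd=550.0)   # made-up new-gen card

improvement = new_gen / old_gen - 1.0
print(f"fps/$ improvement: {improvement:.0%}")   # ~29%, so below a 40% bar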
I don't care what it's called or where they think it fits in the range; if it's faster and cheaper than a 7970 then it's worth a look. I think it's a bit unrealistic to expect a mid-range card to beat the opposition's flagship, and even if they could do it they wouldn't, as it's not economically sensible: they could charge more for something slower. Besides, when has a flagship GPU ever cost US$399?
:)
If hot clocks are still there, then GK104 could cost more to produce (lower yields) and consume more power than Tahiti despite having a smaller die.
Radeon 9700 pro was $399.
But if Nvidia's midrange smokes AMD's high end, I would expect high prices from Nvidia, since AMD wouldn't be able to compete performance-wise. However, I think the GTX 680 is the high-end single card. I'm excited to see Nvidia's offerings in the upcoming months.
Sent from my SGH-T989 using Tapatalk
It had better do a lot better in games than the 7970, otherwise it will be disappointing to me. But if it does turn out to be the midrange chip, which seems to be the case, then on the other hand it will be fantastic. A midrange chip beating the opposition's high-end chip hasn't happened in a long time. GK110 had better arrive within a decent time span, though.
That was a while ago mate, those things were near double that here. I'm pretty sure a mate of mine paid near $1000 for a 9800 All-In-Wonder ..... Ouch
Don't you worry, Nvidia will bleed us if it's really that fast; that's why they're giving it a high-end name, so they can attach a high-end price tag to it ..... You will see
:)
What makes you think Tahiti is high end? Maybe it's a bit more than a performance card, but by current standards it's not high end. Had they made it 450+ mm2 (more units) or clocked it significantly higher, pushing thermals/power to what the market is willing to accept these days (250W), then it would deserve that classification.
If it had GTX 580 +60% performance at stock, no problem. Still impressive for GK104, though. That would be like the GTX 560 Ti competing with the 6970, when in reality it was about 10% slower than the 6950.
Same here, sort of. I am nearing my upgrade cycle time, so I want to buy new cards within the next 4, at worst 5 months (not sure I have the patience, though). And I'd rather not go for 3-Way SLI to future proof myself for the next couple of years. But if high-end Kepler is cancelled then I guess I may have to... Really rather stick to two cards tops, though.
I also don't like 2GB of VRAM. It seems barely enough for my 30-inch display even today, and who knows what's coming later on. And 4GB of VRAM would be overkill and a waste of money. 3GB sounds so much better... :(
Could this also be named the 670? And maybe later there will be a 675 and a 680?
http://semiaccurate.com/2012/03/07/t...nm-production/
Troll?
Bolded parts for emphasis on why I think it's little more than a troll by him, at least for the nVidia-related sections.
Quote:
Originally Posted by Charlie FUD-curate
Charlie again makes no sense. Nvidia is too dumb, no one else is having 28nm problems... and then suddenly a complete turnaround and ALL 28nm production, not just Nvidia's, is halted. Charlie is an idiot.
Charlie, no use crying that you weren't invited; the Kepler event is only 12 days away and invite-only. A comment of that level was to be expected from this guy after Nvidia didn't send him an invitation to the Kepler premiere!
I definitely believe there was some kind of issue with 28nm when you look at how many 79xx-based AMD cards are out of stock at Newegg.
I am not sure you know what you are talking about. You don't cure the flu by killing the patient and reanimating him. Nothing similar happened on 40nm either. If they needed to stop production, then it must have been a different, very serious problem, not just yields: machine failure, issues with chemicals, or possibly a dozen other problems that can happen inside a fab.
SKYMTL reported earlier that AIBs are having problems getting Tahiti chips and boards. That could be caused by this pause in production.
The difference between the GTX 580 1.5GB and 3GB is negligible, even at extremely high resolutions with 4x/8x AA:
http://www.hardware.fr/focus/50/test...-surround.html
In other words, even 1.5GB is more than enough for your 30-inch display.
Apparently, a certain insider on PCInLife was saying "forget about 399 (USD), even 449 might be unachievable with current costs".
So there you go.
Well that's p***** on my campfire.
That's short-term thinking though. If that trend were to continue and AMD got out of the high end segment, in the long run it would be bad for consumers because we'd be stuck with high-priced cards with mediocre improvements each generation.
For most games that might be true but some of the newer games can easily use more than 1.5GB. I saw 2.2-2.4GB of vram usage playing Skyrim with the HD texture pack and 8xAA.
Too bad really, if that's the case.
I'm starting to think that Huang's "disappointment" with AMD's new cards might not be so much about performance.
It could easily be about stock/volumes, knowing AMD came first with their next gen, but who knows for sure.
And that bit about being patient with Kepler could mean Nvidia has difficulties too (stock as well?), maybe... 28nm and whatnot.
Nvidia being quiet about Kepler is one thing, but low prices might not be part of the picture...
Perhaps this is part of the reason why GK110 has been pushed back so far?
I seriously doubt this is only because of NVidia though. TSMC may love NVidia, but there's no way they'd completely freeze their 28nm process ONLY for them if they had plenty of business from AMD... although they may still be mad that AMD tried to do Cayman only on 32nm (which is why TSMC cancelled 32nm altogether).
If the process was having major yield problems though, maybe that's part of the reason AMD was pricing so high.
A big-boy chip at 28nm right now would be masochism, I guess.
http://www.geforce.com/News/articles...dia-kepler-gpu
Quote:
Created in a dark, futuristic setting, the demo utilized a host of advanced rendering techniques that smoothly tessellated and morphed facial features, created realistic street scenes using point light reflections, and replicated the work of the best movie directors through the use of fine-tuned out of focus bokeh filters. The only downside was that it took three GeForce GTX 580s to run the demo in real-time.
Quote:
Today GDC 2012 is upon us, and once again Epic has shown the Samaritan demo, but this time with a twist - instead of three GeForce GTX 580s, the demo was shown running on a single next generation NVIDIA graphics card.
Looks like the March 12th date just got some more substance. It claims MSAA on the 580s vs. FXAA on Kepler.
Quote:
If you’d like to learn more about Samaritan, Unreal Engine 3, or any other aspect of the demo, check out our in-depth Samaritan deep dive from May 2011.
For more info on our next-generation Kepler graphics card, stay tuned to GeForce.com.
Speaking of the "Samaritan" demo, Rein noted that when they showed it off last year, it took three Nvidia cards and a massive power supply to run. This time, however, they showed the demo running on a new, not-yet-released Nvidia card and a single 200-watt power supply.
http://www.gamesindustry.biz/article...tan-into-flash
I see Epic still takes money from Nvidia for marketing purposes... I hope for their sake the Samaritan demo has been optimised since last year.
Since we don't know the details of the TSMC factory stall, I'll look at the bright side and assume things will be better once they start up again. You could imagine something went wrong and needed fixing, but I'm imagining them just improving things. We may never know.
GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc. - TPU
Quote:
Third, the many-fold increase in CUDA cores doesn't necessarily amount to a linear increase in performance, when compared to the previous generation. The GeForce GTX 680 is about 10% faster than Radeon HD 7970, in Battlefield 3. In the same comparison, the GTX 680 is slower than HD 7970 at 3DMark 11.
Fourth, the NVIDIA GeForce GTX 680 will very much launch in this month. It won't exactly be a paper-launch, small quantities will be available for purchase, and only through select AIC partners. Quantities will pick up in later months.
At least here in Germany we don't have that situation. 14 different 7970s are in stock, and half of them are in stock nearly everywhere. Availability is practically perfect here and prices start at $510 (without VAT), which is OK. The 7950 and the 77xx cards are also in stock in most places.
But I still can't believe that TSMC has had their 28nm process stopped for three weeks. No Asian site has noticed it, not even Digitimes, which is normally pretty well informed when it comes to TSMC. And TSMC wouldn't skip a press release on something that everyone will see in the quarterly results anyway, but which could make shareholders pretty angry if they only hear about it at the earnings call.
Pretty much what I expected. GK110 is nowhere near necessary, so they are keeping that one under wraps until AMD steps up with something else, and hey presto, nVidia is ahead again. A dual-GK104 card coming in May is surprising, as that's pretty much around the corner. That performance should last them for some time.
Also, it seems there is massive potential if Epic's Samaritan demo required three 580s and now needs only one Kepler.
But 2,304 CUDA cores for GK110?! Wow, that's a monster.
Isn't the 680 GK110 anyway? Either way it'll be a hot, big, beastly card! Same as last time really: the faster card, but overpriced, ah well.
That demo seems off. Think about it: a massive system with three 580s is what, 1000W-plus once you add in the CPU etc.?
Now it's a one-card system and the whole thing runs on a 200W PSU? Even an overclocked Sandy Bridge is going to pull half of that, so ~100W is low/mid territory, and I don't think that's beating three 580s.
Just fishy really.
GTX680=GK104
The demo ran on a 2-way SLI setup last year, not 3-way; the third card was used for GPU PhysX. This time FXAA was used (instead of MSAA?). Also, no info on fps. And a year of time to do optimizations. And GK104 has double the computing power (not performance!) of a GTX 580. So it's plausible; you just have to think critically.
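For context on the "double the computing power" point, here is how theoretical shader throughput is usually estimated. The GTX 580 figures are its published shader count and hot clock; I'm only showing what "double" would mean in GFLOPS terms rather than guessing at GK104's actual core count or clocks:
Code:
# Theoretical FP32 throughput = cores x shader clock x 2 FLOPs/cycle (FMA).
# This is raw arithmetic capability, not game performance.
def theoretical_gflops(cuda_cores, shader_clock_mhz):
    return cuda_cores * shader_clock_mhz * 2 / 1000.0

gtx580 = theoretical_gflops(512, 1544)      # ~1581 GFLOPS (published specs)
print(f"GTX 580: {gtx580:.0f} GFLOPS")
print(f"'double the computing power' would be roughly {2 * gtx580:.0f} GFLOPS")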
10% faster than the Radeon HD 7970 in Battlefield 3 sounds bad. That's one of the games where the 7970 performs worse relative to the last generation, so the GTX 680 would be very close to it or below it in other games.
It performs as expected or better for a smaller 256-bit 2GB card. At the moment it looks like the card will end up between the 7950 and 7970, and given it has only 2GB of VRAM they should price it closer to the 7950. Not that I think they will; most likely, if it can beat the 7970 in a few games it'll be $499 minimum, simply because Nvidia likes their premium in the high end. And GK110 won't matter as long as it's not ready.
To be honest, I wouldn't worry too much about the 7950... a 7870 pre-overclocked to 1100MHz should get close to the 7950 (the 7870 is what, 10-15% slower than the 7950?)... I suspect the 7950 will not be a card that's remembered among the best sellers (I'm talking about prices, not performance against the GK104).
http://weibo.com/fengke117
Quote:
Originally Posted by Googlish
GTX670Ti
http://www.abload.de/img/gtx680_mfa_keplerx7uud.jpg
http://www.redquasar.com/forum.php?m...7234&fromuid=1
It seems TPU have not completely translated the Heise.de post in their news item: http://translate.google.com/translat...z-1465891.html
Am I the only one with the feeling that Nvidia delayed GK110 mostly because they saw GK104 would be enough to combat this series of AMD GPUs, and will bring out GK110 to combat AMD's next refresh? That would be a big win on Nvidia's side (it saves Nvidia plenty of time to focus on the generation after Kepler), since there's no need for a "Kepler refresh" chip if GK110 will handle the next AMD update.
IMO, if this is true, that would be the BIG WINNER Charlie wrote about earlier, haha.
It ran 4xMSAA last year; now it was FXAA (which would most likely allow a single GTX 580 to run it too). That removed the memory bottleneck Fermi had with the demo. And I bet there have been a lot of other optimizations.
Basically, I can't see how it could be used for anything other than saying "it most likely is more powerful than Fermi"; it can't be used to get even an idea of how much more.
Have you ever seen Nvidia, who place so much importance on the computing sector and have pushed CUDA so hard (Quadro and Tesla are built around it), aim all of their communication at a "mid-range" gamer product, with nothing about Kepler-based Tesla or Quadro (which should double the TFLOPS available for computing)?
http://www.techpowerup.com/img/12-03-08/117a.jpg
http://memebase.com/wp-content/theme...mages/Milk.png
Quote:
GK104 Graphics Card Pictured?
Could this be the very first picture of NVIDIA's GeForce Kepler 104-based graphics card? This Mr. Blurrycam shot has been doing rounds in Chinese forums. While it may not seem convincing at first glance, several features of the card in the picture seem to match the layout of the GK104 reference PCB which was pictured, earlier. To begin with, on the top-right corner you can train your eyes to a deep cutout, for the unusual piggy-backed 6+6 pin PCIe power connectors. The rear panel bracket is a 100% match (in layout and design of exhaust vents), of the one with the true-color image of the GK104 PCB. The only feature that clouds the plausibility of this picture is "GeForce GTX 670 Ti" being etched onto the cooler's shroud. We're hearing more voices refer to the top GK104 part as "GeForce GTX 680" than "GTX 670 Ti". We're also hearing that NVIDIA will adopt a new GeForce logo, so that glaring "GEFORCE" badge on the top of the card looks plausible.
that would look good on my signature..
____________
Here's a much cleaner shot:
http://i.minus.com/ibuntvVd7suglP.jpg
:up:
Quote:
GK104 Dynamic Clock Adjustment Detailed
With its GeForce Kepler family, at least the higher-end parts, NVIDIA will introduce what it calls Dynamic Clock Adjustment, which adjusts the clock speeds of the GPU below, and above the base-line clock speeds, depending on the load. The approach to this would be similar to how CPU vendors do it (Intel Turbo Boost and AMD Turbo Core). Turning down clock speeds under low loads is not new to discrete GPUs, however, going above the base-line dynamically, is.
There is quite some confusion regarding NVIDIA continuing to use "hot clocks" with GK104, the theory for and against the notion have been enforced by conflicting reports, however we now know that punters with both views were looking at it from a binary viewpoint. The new Dynamic Clock Adjustment is similar and complementary to "hot clocks", but differs in that Kepler GPUs come with a large number of power plans (dozens), and operate taking into account load, temperature, and power consumption.
The baseline core clock of GK104's implementation will be similar to that of the GeForce GTX 480: 705 MHz, which clocks down to 300 MHz when the load is lowest, and the geometric domain (de facto "core") will clock up to 950 MHz on high load. The CUDA core clock domain (de facto "CUDA cores"), will not maintain a level of synchrony with the "core". It will independently clock itself all the way up to 1411 MHz, when the load is at 100%.
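Purely as an illustration of the kind of scheme described above (the 300/705/950 MHz figures come from the quoted article; the selection rule and the power/temperature limits below are my own guesses, not NVIDIA's actual algorithm):
Code:
# Toy sketch of a dynamic clock governor along the lines TPU describes.
# Clock values are the rumored GK104 figures above; limits and logic are assumed.
IDLE_MHZ, BASE_MHZ, BOOST_MHZ = 300, 705, 950
POWER_LIMIT_W, TEMP_LIMIT_C = 195, 95   # assumed board limits, not official figures

def pick_core_clock(gpu_load, power_draw_w, temp_c):
    """Return a target core clock (MHz) for the current load, power and temperature."""
    if gpu_load < 0.10:
        return IDLE_MHZ                      # near idle: drop to 2D clocks
    if power_draw_w < POWER_LIMIT_W and temp_c < TEMP_LIMIT_C:
        # headroom available: scale from the baseline toward the boost ceiling
        return min(BOOST_MHZ, int(BASE_MHZ + gpu_load * (BOOST_MHZ - BASE_MHZ)))
    return BASE_MHZ                          # no headroom: hold the baseline clock

print(pick_core_clock(0.05, 40, 45))    # 300
print(pick_core_clock(1.00, 180, 80))   # 950
print(pick_core_clock(1.00, 210, 90))   # 705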
I see it now: the wood screws have been swapped for an anti-static bag, smart tactics!
That could be a block of wood as far as we know ..... Bring it Farken
:D
Hmmm, where have I seen this situation before...
Me Gusta
Kyle expects the 680 to be faster at this point in time.
If and when we finally see GK110, it's likely going to be a monster performer. AMD may really have set the standard too low this time.
I wish reviewers good luck benchmarking this card: knowing what speed the card is actually running at, and what happens during that time. Is the card at 750, 780, 800, 900 MHz? Will the result be the same if you bench the card after an hour of play? Will it be the same once the card is running hot? Is it overclocked per application to gain some fps in benchmarks, and will the result hold when someone plays BF3 for an hour? Does it detect game benchmarks and adapt its boost profile, and does it work the same way in actual gaming?
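One thing reviewers could do is simply log the clocks, temperature and power over a long session, so any throttling or benchmark-specific boosting shows up in the data. A rough sketch, assuming a tool like nvidia-smi exposes those query fields for the card (which remains to be seen):
Code:
# Poll the GPU every few seconds during a long gaming or benchmark session and
# log clock, temperature and power draw to a CSV for later plotting.
import csv
import subprocess
import time

FIELDS = "clocks.sm,temperature.gpu,power.draw"
DURATION_S = 3600   # one hour of play, as in the post above
INTERVAL_S = 5

with open("clock_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "sm_clock", "temp", "power"])
    start = time.time()
    while time.time() - start < DURATION_S:
        out = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"]
        ).decode().strip()
        writer.writerow([round(time.time() - start)] + [v.strip() for v in out.split(",")])
        time.sleep(INTERVAL_S)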
I summarized what he said in the linked part, and that's most likely what will be the case when GK110 arrives, if GK104's position is similar to the 560 Ti vs. 580 anyway.