I wish that were all true, I really do. A mainstream-performance Kepler card easily beating a Radeon 7970? I just find it hard to believe. Possible, but Kepler would be groundbreaking.
OBR's posts would be sad if they weren't so hilarious due to his failure with the English language.
In English we call people like him attention wh0res.
Let's hope Kepler actually lives up to whatever hype is generated. Aside from the G80, as far as I can recall the only cards that ever managed this were the 3dfx Voodoo cards, followed by the GeForce 256 and the Radeon 9700. It's not so easy to do now that $200 buys a card that will deliver around 60fps at 1920x1080 in whatever console port is popular.
Is it possible that it's all just Nvidia propaganda to slow AMD sales? Surely if it is, and Kepler fails to impress, then it's all going to come back and bite them .....
:D
OBR got linked yet again in this forum and others, +1 more victory for him.
What I find completely hilarious is Charlie's complete 180 on his usual position on Nvidia, with nary an explanation as to why. Now he's claiming that what comes from his own site, called SemiAccurate, ranks as good journalism and accuracy. Ironic, no? Is anyone going to hold a gun to his head when he posts drivel? Don't think so, because he can just point to the website name. He doubts Nvidia's underhanded tactic of hiring internet goons to post positive reports about Nvidia. This is not new at all; it began when AMD/ATI made cards that kicked the crap out of Nvidia's offerings, which were either late, not powerful enough, or both. Now we're kind of seeing the same thing again, except this time the news is coming from the former anti-Nvidia crusader.
Whatever the case, it certainly generates clicks. And we will know exactly what is going on when everything plays itself out in the coming year. We will finally know if Charlie is the truth-bringer or just another sucker who can be bought out. You can only sell your reputation once; after that it isn't worth anything.
Another point: wasn't Nvidia trying to do away with hot clocks in this architecture? If so, why is the shader clock higher than the GPU core clock?
However this release turns out for Nvidia, I'm going to buy the Nvidia product because of the drivers. I don't like AMD's drivers, and they have been lacking in quality over the last year or two.
I want to say that was just a rumor, we have no evidence of it being true
Still waiting on these; I'm hoping to see one that can outperform my GTX 460 SE with lower wattage drawn :).
... Trying to keep my PC under 150W.
http://news.techeye.net/chips/nvidia...-perturbations
Don't shoot the messenger; I just stumbled on this article and am sharing it... even though I have a lot of doubts about it.
:rolleyes:
Quote:
trying to buff up Kepler by bringing Ageia to the hardware.
They've got CUDA, so why on earth would they put outdated AGEIA hardware in Kepler? Sounds like FUD to expose moles. Bazinga.
Sounds good. I like rumours like this. Just try not to put too much stock in them, cuz it sucks when they disappoint! ;)
Quote:
One industry watcher said to TechEye the company is in "holy s**t" mode - having been confident that the GK104 would fight off and trounce the competition, but the timing is out of whack.
....HOLY S**T mode engaged !!! Good for them ! :rotf:
Charlie is saying something quite interesting.
Usually people think of ATI cards as being great in some games and really bad in others, while Nvidia cards are just good in all of them.
But he believes the tables are turned, and that the card will basically suck in any game Nvidia's devs didn't help code.
However, I wonder if the games that do run great will also generate a lot of heat and push the cards to their limit (like running FurMark on a 4800/5800 and hurting them). Too much efficiency can push a GPU out of its thermal specs.
PhysX-enabled games faster on nV hardware?
I'm sorry, but that's 5-year-old news ...
What titles use hardware physics that isn't PhysX right now?
Another thing I found interesting is that this card doesn't seem to be the great 7900-series killer some made it out to be. A lot of people were wondering how a $300 card could compete with a $450-$550 card, and now it seems this card was never meant to compete with the 7900 series. It's also interesting that it can hold its own against the 7900 series in some benchmarks when it's probably going to be marketed against the 7800-series GPUs.
Hopefully sometime early this summer the GK100 will show up, and we can finally see how the single high-end GPUs stack up against each other.
I'm launching a poll: is 2012 the year of marketing? ...
If all this is true, it is really sad. Games were made to use the GPU's power, not to limit it in some places and accelerate it in others (and sadly we already saw this too much in 2011: Crysis 2, HAWX, Batman, StarCraft, Anno (for AMD)...).
I hope the market will come back to some normality. We don't need games made to sell GPUs, but games made to be played... I don't want to see a war between Gaming Evolved (AMD) and TWIMTBP games; we would be the clear losers in that.
Clearly, this is an abundant source of money (and help) for game producers, who already have to deal a lot with banks to fund a game. (An average game now costs nearly 2x what Avatar cost.)
OBR says:
Quote:
SemiAccurate is absolutely wrong about Kepler ...
Charlie Demerjian knows nothing about Kepler; he's just speculating. In essence, he copied the information from my blog. I first talked about the problems with PCIe 3.0. I talked about the new PhysX feature, a small core with low power consumption. But the truth is different: the source he drew from (MSI, the same as mine) is faked by Nvidia itself. Kepler has no PhysX block, and its performance is great against Tahiti everywhere, not just in games from RG. What I said long ago still holds: an absolute 7970 killer ...
PS. This is the last Kepler info here until some real Chinese leaks with specs ... but I can tell you, the REAL specs are floating around the web ... you just have to find them ... :)
Lol, is OBR still serious? This guy would need two bodies to carry such a big head.
How can he have any idea of Kepler's performance (he's been talking about it for 2 months now) if the A1 silicon was bugged, and as of a week ago the A2 silicon had been out of TSMC for less than 4 days?
In my opinion, Nvidia would have done their "paper launch" of Kepler already if they thought it would destroy the 7970... Usually a paper launch makes fanboys wait for the card... This time around, either their marketing department was asleep or they just don't have the horsepower???????? I SERIOUSLY WANT KEPLER TO beat the hell out of the 7970 so that I can replace my GTX 580 with a worthy architecture... This 7970 is like a half boner for me: I want it, but it's not giving me the full pleasure.
For sure a lot of things are unclear: why put words on the table about GK104, the midrange, and not speak about the real Kepler, the high end? If Kepler has something revolutionary in it, why doesn't Nvidia push it at hardware and computing conferences? Computing and professional GPUs are a big part of Nvidia's market (Tesla, Quadro, etc.).
AMD had no problem showing us GCN 7 months ago, in June 2011. We knew nearly everything about the architecture; even code generated for it was shown; we knew it would pack 30-32 CUs, etc. Every part was explained...
That's because there is no 7970 killer incoming. It's all just a fairy tale created by nvidiots. There is no official info whatsoever. That should tell you something.
No official info tells you exactly that: no official info. Nothing more, nothing less.
If NV released some sort of sneak peek, you would discount the results as cherry-picked. If they paper-launched, rumors of a magical driver or a higher-clocked BIOS update would surface in a matter of minutes...
By now, both companies have probably learned there is just no pleasing some people.
I'm not talking about performance, but about what the architecture is... That's why I made the comparison with AMD, who showed us the GCN architecture in June 2011... I'm sure a lot of people in server, computing, and professional graphics would be interested to know what features this new architecture brings to the table... (starting with me, for my job... well, I'm a CAD designer, AutoCAD).
lol man :D......
Why don't you forward these questions to Charlie? Or is everything said by Charlie true, and everything said by others not? You have to build your analysis on data and basic information, not on a personal point of view. If I take your word for it, then anything said by anyone other than Charlie is false and illogical, which is absurd. Be more reasonable and logical.
Nvidia Kepler GTX690
750MHz Core clock
2×1.75 GB 4.5GHz GDDR5 Memory
2×1024 Stream Processors
2x448bit Bus Width
Priced at $999
-----------------------
Nvidia Kepler GTX680
850MHz Core clock
2 GB 5.5GHz GDDR5 Memory
1024 Stream Processors
512bit Bus Width
Priced at $649
------------------------
Nvidia Kepler GTX670
850MHz Core clock
1.75 GB 5GHz GDDR5 Memory
896 Stream Processors
448bit Bus Width
Priced at $499
-------------------------
Nvidia Kepler GTX660Ti
850MHz Core clock
1.5 GB 5GHz GDDR5 Memory
768 Stream Processors
384bit Bus Width
Priced at $399
--------------------------
GTX 680: 45% faster than the Radeon HD 7970
@BHD
And your source is? Not that I don't believe you or anything, but the 7970 is ~34% faster than the GTX 580 at 2560x1600 with 4xAA/16xAF. In order to be 45% faster than the 7970, a single-GPU 680 needs to perform around 1.94x better than the GTX 580, which, even though not impossible, is highly unlikely. You have 2x the "stream" processor count, but roughly half the shader clock, which is now in sync with the rest of the GPU. You have almost 84% more memory bandwidth, though, so there is close to enough of it for this to be true. But the SPs would have to be massively more efficient to net such a huge gain over GF110.
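Just to lay that multiplication out in the open, here it is as a back-of-the-envelope calculation. All the ratios are the rumored/estimated figures from this thread, not measured data; Python is used purely as a calculator.
Code:
# Back-of-the-envelope check of the relative-performance chain above.
# Baseline: GTX 580 = 1.0; the 1.34 and 1.45 ratios are the rumored
# figures quoted in this thread, not measured data.
gtx580 = 1.00
hd7970 = gtx580 * 1.34        # 7970 ~34% faster than GTX 580 (25x16, 4xAA/16xAF)
gtx680_rumor = hd7970 * 1.45  # rumor: GTX 680 45% faster than HD 7970
print(f"GTX 680 vs GTX 580: {gtx680_rumor / gtx580:.2f}x")  # ~1.94x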
My Friend Informal
Source >
http://lenzfire.com/2012/02/entire-n...se-date-43823/
That would be the GTX660Ti, and it looks like it will be priced at US$399. Based on what BHD said, if a GTX680 is 45% faster than the HD7970, that would mean a GTX660Ti would be ~5-10% faster.
The price looks pretty nice if you ask me. And it also makes sense: it has 33% more SPs than a GTX580, and if you factor in architecture optimizations... it should be faster than the HD7970 (which is 25% faster than the GTX580 on average).
That's why a new AMD SKU makes sense. Every HD7970 clocks to 1200MHz easily; that's 30% more clock, which usually translates to ~25% more performance (quick arithmetic below). That hypothetical "HD7980" would perform between the GTX670 and GTX680. Just what AMD has been doing all these last years...
[/Speculation]
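For anyone who wants to check that clock-scaling guess: the 925MHz figure is the HD7970's reference clock, and the ~0.85 clock-to-performance scaling factor is a rough assumption of mine (GPUs rarely scale 1:1 with clock), so treat this as a sketch, not a prediction.
Code:
# Toy estimate for the hypothetical "HD7980" speculated above.
# Assumes 925 MHz stock, a 1200 MHz clock, and performance scaling
# at ~0.85x the clock gain (an assumption; scaling is never 1:1).
stock_mhz, oc_mhz = 925, 1200
clock_gain = oc_mhz / stock_mhz - 1   # ~0.30 -> 30% more clock
perf_gain = 0.85 * clock_gain         # ~0.25 -> roughly 25% more performance
print(f"clock +{clock_gain:.0%} -> performance ~ +{perf_gain:.0%}")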
Looks really good. I hope it's true. Still, only 2 GB memory :(
A chip with 1024 SPs would probably be 20% bigger than a 580, and that took a year to come out, which is why we got the castrated version called the 480.
Interesting: according to the website, the GTX660Ti and up are all 550mm2, which means it's the same chip; the 7970 is about 365mm2. The GTX660Ti does seem to be the logical fighter for the 7950, not the 7970; that "10% better than the 7950" might be on specific tests. I do expect the GTX670 to exceed the 7970's performance and the GTX680 to destroy the 7970, if the charts on the website are true.
In theory, I don't think it's that unreasonable for it to approach double Fermi performance (say +80%) with those specs (and the accompanying huge die size). What I do think would be a problem in reality is thermal headroom: whilst you can double your shader count from one node to the next, that usually comes with a marked increase in power consumption, and even Fermi 2 (the GTX 580) is pushing the boundaries of what's acceptable (with the GTX 480 clearly demonstrating what's not acceptable).
So IMO what will make or break it is how much they've improved the architecture's performance per watt, as they've been behind AMD for some time in this regard (though not by much).
I think 60-70% faster would be a more realistic expectation, which would still make it considerably faster than the 7970. I don't think AMD would respond with anything wild, though, of course. They have no interest in pushing the limits of die size and power on GPUs. Their strategy seems to be to release something early, enjoy a comfortable lead for ~6 months, then sit in the sweet spot when NV finally releases their monster.
A modest successor/refresh to the conservatively clocked 7970 would be enough to maintain their previous single-GPU position, I think (6970 vs 580, 5870 vs 480, 4870 vs 280, etc.).
Really hard to believe. Too much information about the entire lineup, too soon.
We have never known this much about a lineup this early, especially performance numbers and, most importantly, accurate numbers.
Not to mention Nvidia suddenly being ultra-coy about their amazing performance :rolleyes:
I'm impressed if this stuff's true. Still, two months in advance of launch, seems... suspect?
Come on, really: if they had this kind of performance, they'd show it. Just one official benchmark. They did it with Fermi 4 months before the 480 was released, showing how it pwnd the 5870 in Far Cry 2. Surely they can show how amazing they are in any title right now? Just 1 official benchmark, that's all we ask.
Whether this is true or not, the Ti version is what the Ti version was for the 560 Ti: a 470, or if you prefer, a castrated 570/580. I doubt they want to launch something like that at the start; the 560 Ti was a card to use bad cores that couldn't go into 570s/580s. Other than that, this is a serious waste of money: take a 550mm2 chip, with its cost, and laser-cut a good part of it off.
You do that when you are sure you have some number of cores that can't pass the test to go into a 670/680... You don't think of doing it before that, because you'd have no idea how many cards you could produce. If you have to take good cores that could go into the 670/680 line, you lose a lot of money and wafers.
GTX 680 45% faster than the Radeon HD 7970 :D But a $1000 price for the DUAL GPU :rofl::down:
Lordy, gonna need a 2nd job for my next gpu upgrade.
If those specs are true, it looks nice, but again, what's with the low VRAM? Seems like Nvidia did not learn their lesson from the GTX 580.
64 ROPs, that's a lot.
The 7970 has only 32 ROPs. I think AMD needs to up those and not wait 4 years like they did going up from 16 ROPs in the past.
I think 45% is a little inflated, probably cherry-picked results; overall, 25-30% faster is what's expected and realistic, I guess. With double the stream processors and arch improvements, I think we can expect 170-180% of the 580's performance. Is 2GB of memory going to be too little at 2560 resolutions?
Except the GTX 480 was also a very flawed chip. I think they did that more to stir interest in a product they saw as a flop and a stopgap until the GTX 580. Note that they were completely silent about the 580, and we knew almost nothing official about the 8800 GTX: their two best products of the past decade, IMO.
Finally, after many months of waiting, we have the next gen Nvidia specs.
Kind of disappointed with only 2GB of RAM on the flagship card (GTX 680) as the 7970 and variants of the GTX 580 have 3GB. I'm planning on upgrading from 3x1 Surround to 5x1 Surround (hopefully Nvidia will allow this). Here's hoping for additional aftermarket RAM.
Wow, grain of salt much?
People are eating this up that easily?
I might live in a bit of a shell when it comes to good websites breaking news, but I have never seen Lenzfire up there with the leaks; that, and the specs they show off (and their "sources") all seem way too good to be true.
-PB
The specs, core-wise, are right in line with what Nvidia has been doing: doubling core count with each process shrink. I'll believe the specs when the cards are released, but the quoted numbers are well within reason based on previous hardware revisions.
At this point it's just as baseless to say the specs are wrong as it is to say they are correct; we don't know for sure either way.
Those specs are based on a few different things, all rumors, though, I think...
1. Shader count is doubled; that seems based on a rumor I read a while back.
For example, 384 x2 and 512 x2 equal the counts shown in the specs above.
In reality you never know; Nvidia could change this within a month of launch, say if they screw up on yields and have to: 992 instead of 1024, for example.
2. GPU clocks are likely from another piece of rumor news; I wouldn't know, I never paid attention to the clocks (honestly, I don't pay any attention to those anymore, at least not right now).
The memory clocks could be values someone picked out of their @rse.
3. The memory controller crossbar width is listed like it was for the last few generations of cards.
It looks to me like someone pulled together a spec or two from rumors, then compiled a list of so-called next-gen cards based on that info plus info about the current generation of cards.
For someone like me, all I'm interested in is the wattage, the amount of memory, the memory crossbar bit width, and the number of shaders.
I won't know that info until after release; info on the wattage is usually not written in places that are easy to find...
Not really interested in replacing my furnace with an electric VGA card of a heater for my house...
Eh, whatever, I guess.
We should start seeing some real leaks from the Chinese soon, I'm just assuming.
Quote:
Let's analyze this, boys:
- Kepler does not have hot clocks, yet the chart has hot clocks for all Kepler parts. Check.
- The GTX 690 will be the flagship Kepler, a single PCB with dual GPUs, yet the chart has the GTX 690 using a 224-bit memory bus x2. Check.
- The GTX 660 will have a 224-bit memory bus. Not even possible. Check.
- Stream processors? Really? They're called CUDA cores. Wrong. Check.
- The GTX 680 has a 512-bit memory bus, while the dual-GPU version, the 690 (GTX 680 x2), has a 224-bit memory bus. Lolworthy.
The kicker really is their GTX 690 specs. According to that chart, it has a 224-bit memory bus with 1.75GB of memory, times two. Hilarious.
Yep this sounds plausible.
Post from another forum.
45% is thanks to simulation. Real numbers will be lower.
Has there ever been a launch where so many sets of different specs floated around? I still feel we don't know anything at all about Kepler.
Many times...
G80 (no one had a clue what was going on)
RV770 (480 SP, then 800 SP, then 480 SP, then it finally launched at 800 SP!)
Cayman (1920-shader rumors, etc.)
to name a recent few.
Evergreen and Fermi were a bit more predictable, but the chopping off of an SM was a last-minute surprise, IIRC.
lol, how many Aussies are in this thread clutching at hopes of a killer video card on the horizon.
Cheers boys.
:up:
Anyone know how Nvidia will price this card?
"NEVERMIND"
Ewww, $650 and only 2GB? C'mon, for that price it should be 4GB....
Please look at this review
http://www.hardware.fr/focus/50/test...-surround.html
The difference between the 1.5GB and 3GB GTX 580 is negligible even at extremely high resolutions with 4x/8x AA.
In other words, 1.5GB of RAM is more than enough, and yet you are complaining about 2GB?!
If the GTX 680 is really 45% faster than the HD7970, then the price is very attractive. Actually, even if it is only 20% faster, the price is still not bad.
2GB of memory is more than enough. 4GB is going to make it more expensive to manufacture, and you will probably not see any performance gain.
It doesn't even make much sense that the GTX 660 has 2GB while the GTX 670 only has 1.75GB, WTF? I call fake on these specs.
I've read more pissing-into-the-wind speculation than I care to in this thread, so to try and plant people's feet back on the ground, let's reiterate and remind people of a few things:
1. Even the GTX 560 Ti struggles to break 1GHz on the core, even with increased voltage and good cooling, so "well above 1GHz as standard" on the core with the next gen? *chuckles* Let's be realistic: default reference clocks will likely hover around 850-875MHz for the new midrange cards.
2. 2GB midrange cards: very nice in principle, but let's not forget the current price premium between a 1GB and a 2GB GTX 560 Ti. They are £140 for a 1GB MSI Twin Frozr II and £206 for a 2GB Gainward 560 Ti, and those are the cheapest prices I can find. That extra 1GB of vRAM is not worth a £66 premium. If the difference were £20-25 (like it should be), then yes. £66, though? LOL!
3. Next-gen performance... come on, people. After all the effort Nvidia put into getting GF104/114 etc. halfway decent, do you think they are going to throw millions away on a completely new architecture so soon? Of course not. The architecture will get a heavy revamp, I'd think, but it won't be an all-new, whizz-bang "OMGZor look-at-this!" architecture.
Bottom line: I expect a 20-25% performance improvement for next-gen midrange cards over the current GTX 560 Ti. Perspective, people.
I wanted to address some things here. I'm not saying whether ANYTHING is accurate or not... just debunking some things.
Debunking a rumor with another rumor... nice!
Quote:
Kepler does not have hot clocks, yet the chart has hot clocks for all Kepler parts. Check.
Why is a 224-bit bus not possible?
Quote:
The GTX 690 will be the flagship Kepler, a single PCB with dual GPUs, yet the chart has the GTX 690 using a 224-bit memory bus x2. Check.
The GTX 660 will have a 224-bit memory bus. Not even possible. Check.
The kicker really is their GTX 690 specs. According to that chart, it has a 224-bit memory bus with 1.75GB of memory, times two. Hilarious.
7x 32-bit memory controllers = a 224-bit bus width.
If you couple each of those controllers with a single 256MB IC, you get 1792MB, i.e. the 1.75GB in the chart.
I'm not saying it will happen. Rather, I am saying we should all refrain from making blanket statements about what is and isn't possible based upon the architectural limitations of previous generations.
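To make that arithmetic concrete, here is a tiny sketch of the relationship described above: bus width and capacity both follow from the number of 32-bit memory controllers, so "odd" widths like 224-bit fall out naturally. This is a toy illustration of the argument, not a claim about the real GK104 memory layout.
Code:
# Bus width and capacity as a function of 32-bit memory controller count,
# assuming one 256MB DRAM IC per controller (the setup described above).
def memory_config(controllers, mb_per_ic=256):
    """Return (bus width in bits, capacity in MB)."""
    return controllers * 32, controllers * mb_per_ic

for n in (6, 7, 8):
    bits, mb = memory_config(n)
    print(f"{n} controllers -> {bits}-bit bus, {mb} MB ({mb / 1024:.2f} GB)")
# 6 -> 192-bit, 1536 MB (1.50 GB)
# 7 -> 224-bit, 1792 MB (1.75 GB)
# 8 -> 256-bit, 2048 MB (2.00 GB)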
This thread is for rumors about GTX780, please no more rumors about GTX6XX series, they're outdated. :D
Well, it does look like they're taking their sweet time, so I wouldn't be surprised if they jumped Kepler to the 700 series. I just think that's way too consistent a naming scheme for Nvidia. :ROTF:
Lol
Enjoy. Some of it is spot-on; some of it, who knows. But finally there's something real (= confirmed).
http://i41.tinypic.com/9h6xe9.png
Nvidia made a 6970?
It's old, I know. It's funny, though, cause the same thing happened at least once already: unbelievable specs show up, people discard them and ask for real info, and a few weeks later somebody tells them they had the real specs all along.
The same source also confirmed the rumors about the $299 price. It depends solely on the final clocks, obviously, but it's a possibility.
So I'm guessing the GK104 specs are right, with the exception of the TMUs (could be 96), the TDP (around 170W) and the clocks (due to A3, I'm guessing between 1100 and 1150MHz).
Hot, cold? :D
So because Chiphell posted a table, that == legit?
-PB
The exact same table was already posted on Jan 2 by Olivion... http://www.xtremesystems.org/forums/...=1#post5026626 post 35.
The source was already Chiphell: http://www.chiphell.com/thread-338350-1-1.html
I discarded it the first time I saw it too, but something has changed.
Somebody trustworthy confirmed them.
Very interesting indeed! But not everything in this table is correct, is it?
For example, in the case of GK104, they have the bus width and ROPs right, but the TMU number is different. The TDP will also change after they finalize the clocks.
That fits with the info a guy gave on another forum. He said one column is almost completely correct, the others not so much. I guess he was talking about the GK104 column. I still bet on 96 TMUs and a sub-200W TDP.
Ok, somebody confirmed the ROP count and bus width, but what about the most important specs, like the CUDA core count or the absence of hot clocks? :shrug:
For GK104, this time their marketing team won't use "DirectX 11 done right" (as they did with GF104) but "Cayman done right" :P
With those specs (and if efficiency per unit is not completely down the toilet), GK104 should be able to approach the 7970 at times, but mostly duel with the 7950 and win slightly.
Has anyone got links to where the no-hotclock rumour originated?
-PB
Seems the AMD and Nvidia architectures are converging...
Is GK104 still due out this month?