Haha, I know the feeling.
Forgot the source link for anyone who wanted it: http://www.facebook.com/pcper
What time is it there now, lol? Just got another ''Soon'' update from PC Per a minute ago... Ryan Shrout also says the same on Twitter, if anyone wonders.
Gah, turns out it is the mobile one for the review..... oh well.
It's a GT640M preview, based on a GK107.
http://www.pcper.com/reviews/Mobile/...Arrives-Mobile
Hot clocks are still there! And it looks hot indeed...
lol PC Perspective says.... " Desktop will have to wait a little longer."
Oh, forgot we kept things in one thread...
http://www.brightsideofnews.com/news...-gt-640m-.aspx
Perhaps a bit more accurate.
Blah, blah, blah. Nvidia is definitely price-gouging this "generation." So what else is new?
Only 2GB = fail. Nvidia Surround says, "Oh, come on!!"
GTX 670 being renamed "GTX 680" = double fail. Nvidia fans say, "@#$%!!"
AMD being put in their place = nothing new. Nvidia says, "BAHAHAHAHAHAHAHAHAAWW!!"
This thread = epic popkernz. I say, "http://1337gif.com/images/gif/Popcorn-Deer-52.gif"
One thing I wouldn't mind happening is if, for some odd reason, AMD gets beaten badly enough that they decide to make huge monolithic designs to compete with the big chips from Nvidia. I would love to see AMD make a 500mm2+ monster. Their high transistor density and performance per watt should translate well into larger chips. If we are going back to the days of 600-dollar graphics cards, I want these cards to be monsters that can replace midrange dual-GPU solutions. This would hopefully push the new cards back into midrange status and down to 400 or less.
I'm still waiting for a single GPU that can push 120Hz in BF3... maybe this round won't do it either...
:(
So according to PC Per's numbers, Kepler is 15% less efficient than Fermi per shader and per GHz.
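Back-of-the-envelope, that kind of per-shader, per-clock comparison is just a normalization; a minimal sketch, where the shader counts are the rumored specs and the performance ratio is purely an illustrative assumption (not PC Per's actual data):

```python
# Normalize relative performance by shader count and clock to compare
# per-shader, per-GHz efficiency. All performance numbers are made up
# for illustration; only the method mirrors the comparison above.

def perf_per_shader_ghz(rel_perf, shaders, clock_ghz):
    """Performance contributed by each shader per GHz of shader clock."""
    return rel_perf / (shaders * clock_ghz)

# GF114 (GTX 560 Ti): 384 shaders on a ~1.645 GHz hot clock, baseline perf 1.0
fermi = perf_per_shader_ghz(1.0, 384, 1.645)

# Rumored GK104: 1536 shaders, no hot clock (~1.0 GHz), ~2.1x the baseline
# (the 2.1x figure is an assumption chosen only to show the arithmetic)
kepler = perf_per_shader_ghz(2.1, 1536, 1.0)

print(f"Kepler vs Fermi, per shader per GHz: {kepler / fermi:.2f}")
```

With those assumed inputs the ratio comes out around 0.86, i.e. roughly 15% less per-shader efficiency; the takeaway is that the claim depends entirely on which clock (hot clock vs. base clock) you normalize against.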
Actually, with one 7970 you can replace 2x 5870 (and so 2x 6870) without any problem, performance- and cost-wise. I have tested it while waiting for my second 7970. Outside of some rare cases where you get a few fps less (Dirt 3), a single 7970 is faster, and sometimes a lot faster, than this CrossFire setup. Synthetic-wise (3DMark11, Vantage, Unigine), at stock it beat my 2x 5870 at 1GHz+. In a lot of recent games (Skyrim, BF3, etc.) the 3GB makes a huge difference vs. the 2x 1024MB (not shared in CFX or SLI).
Actually, a single stock 7970 is in general a lot faster than 2x 5870 (OC)... You will find some "old" games where it is equal, but in general it is faster.
Max settings, no way: that would mean average fps 300% higher than a single GTX 580; let's forget a minimum of 120fps.
Hard to believe they'd manage to improve the performance/watt ratio as well as performance/mm^2 THIS much. Quite impressive if that's accurate.
I've been corrected by Theo from BSN, and that makes sense, as it is not what Anandtech reported...
BSN
http://www.brightsideofnews.com/news...-gt-640m-.aspx
Anandtech review
http://www.anandtech.com/show/5672/a...kepler-verge/1
Do you have links to some data that say VRAM use is over 2GB in Surround, barring isolated examples like "game X with texture pack Y at 8X MSAA"? My point there being that I can probably find settings where 3GB are exceeded, or 4GB for that matter.
I've done a lot of surround gaming at 57X10 and 60X10 (most common res for surround, 1080PX3) with 1.5GB, 2GB, and 3GB. For the most part, 1.5GB works at 4X MSAA. I don't think I've ever seen problems at 2GB or 3GB.
Not to mention for all of 2011 pretty much everyone and their brother was saying "2GB is the thing to have for surround/eyefinity".
If 680 ends up at 2GB, that won't be a factor in my buying decisions.
interesting that kepler is in a shipping notebook today.
also interesting is the continued reporting of a dual GK104 board coming quickly. you'd be hard pressed to call that anything but 690. so given that all the 600 numbers are about to be used up, the next big chip will have to be 780. thirdly interesting is whether we'll see that before 2013. paying full price for not-the-big-chip does kind of annoy me.
The GF104/114 was more efficient than the GF100/110, perhaps it was planned?
AMD does not publish the actual TDP of their cores. The 250W is the maximum sustained power input which the board itself is rated to handle. The 210W is the "PowerTune max" consumption and can be controlled (increased / decreased) via the dedicated slider in the CCC's Overdrive panel.
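To illustrate the distinction, the PowerTune cap is just a percentage offset applied to that 210W figure; a hypothetical sketch (the function name and the -20..+20% slider range are assumptions for illustration, not AMD's API):

```python
# Conceptual sketch of how a PowerTune-style slider adjusts the power cap.
# The real limit is enforced by hardware/driver; this only shows the math.

def powertune_cap(base_cap_w: float, slider_pct: float) -> float:
    """Apply an Overdrive-style slider offset (in percent) to the base cap."""
    if not -20 <= slider_pct <= 20:
        raise ValueError("slider outside the assumed -20..+20% range")
    return base_cap_w * (1 + slider_pct / 100)

print(powertune_cap(210, +20))  # 252.0 W with the slider maxed out
print(powertune_cap(210, -20))  # 168.0 W with the slider at the minimum
```

Note how this is separate from the 250W board rating above: one is a throttling target the driver enforces, the other is what the VRM/board design is specified to sustain.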
http://translate.google.fr/translate...m%2F18768.html
Quote:
but based on feedback from manufacturers, the GTX 680 easily exceeding the HD 7970 is no problem
The name 670 Ti was thrown out there by Nvidia to confuse the competition. Come on, that was easy :)
Why would that confuse competition?
Ask the author, I'm just translating.
When unicorns are involved things get confusing.
Nope. That comparison is decent because it compares the full-fledged chip in each instance. If you were to compare a 7950, it wouldn't be as efficient as a 7970, at least surface-wise. As it is, they compared every top bin, which is the thing to do when comparing chip efficiency based on transistor count / die space.
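That top-bin comparison reduces to simple perf-per-area and perf-per-transistor ratios; a quick sketch where the die sizes and transistor counts are the commonly reported figures, but the performance indices are assumed placeholders rather than benchmark results:

```python
# Compare full-fledged top bins by performance per die area and per
# transistor. Perf indices are illustrative assumptions, not benchmarks.

chips = {
    # name: (assumed perf index, die size in mm^2, transistors in billions)
    "Tahiti XT (HD 7970)": (1.00, 365, 4.31),
    "GF110 (GTX 580)":     (0.80, 520, 3.00),
}

for name, (perf, area_mm2, xtors_bn) in chips.items():
    print(f"{name}: {1000 * perf / area_mm2:.2f} perf per 1000 mm^2, "
          f"{perf / xtors_bn:.3f} perf per billion transistors")
```

Running the same ratios on a cut-down part like the 7950 would understate the architecture, since the disabled units still occupy die area — which is the poster's point about always comparing top bins.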
Yeah, I know man... for example, why is this unicorn attacking things when it's supposed to be making video cards?
http://img254.imageshack.us/img254/2...cornplushb.jpg
Was the mouse attacking the factory? Was it trying to steal a Kepler sample?? I'm really confused!
So... nothing is certain, even if the launch "should" be close.
If Nvidia doesn't make a big Kepler then this round is shaping up to be much like the last where Nvidia leads by 10-15 pct on the flagship. The difference is that unlike Thermi, now they have a very efficient chip much like Amd.
What I am still left guessing at is how they achieved such great efficiency; something must have had to give. My guess is that GK104 will be inferior to Fermi in GPGPU computation and will lose a lot of its features oriented towards the professional segment and stream computing. This chip will probably be geared only for gaming, which is likely how they managed to save so much die space and still make a powerful chip.
New uarch ushers in new/different standards in power and performance for Nvidia, similar to how Intel's Conroe Core 2-based units changed a lot of dynamics from the P4 uarch.
Nvidia is very heavily invested in GPGPU; there's no way they will sacrifice in that area, especially now with AMD and Intel taking the whole GPGPU thing more seriously.
Do people really buy a GTX 580 for professional computing? Or do they buy Tesla/Quadro?
Same die, that's what matters.
^^^ the difference??... I am no expert on it, I've only used Quadro cards with my friends doing rendering on Rhino. So I'll let someone else who's well versed on this answer your questions.
The biggest issue I see when trying to run a consumer-grade video card with professional software is that consumer hardware usually lacks the software developer's certification/validation, which is needed to enable specific hardware-accelerated features.
So while the consumer card should be able to do everything the professional cards do, without the certification/validation you can't take advantage of some or all of the hardware-accelerated features possible for a software package.
http://www.xtremesystems.org/forums/...=1#post5068055
It will in mine. I'll stay with Nvidia, but, wait for the non-reference cards with 3, 4GB+ VRAM. (See GTX 580 3GB)
It used to be easy with RivaTuner, but eventually you simply cave and buy the hardware you need to avoid the headache and hassle, since you need validated/certified hardware as well as specific-revision certified/validated drivers. I'm sure if you're determined enough you can jump through some hoops to make things function, but at some point it's not worth the effort.
I'm guessing that now, with the professional market much more developed than in the earlier days, they're locking out features and limiting consumer-grade hardware, from what I've seen. Up until around Nvidia's 2xx series it was rather easy, but with newer software revisions and newer hardware generations they seem to be forcing a separation of the consumer and professional markets even though the hardware is fundamentally the same.
Several others also pointed to this same date, so this may be it.
Quote:
The Kepler family NDA lifts at 6:00 am Pacific Time on March 22 (21:00 local time on the 22nd)
^^^ A logical assumption taking into account AMD's release of the HD 7xxx cards, but it seems that even with the current performance-leading card... AMD continues to lose discrete market share to nVidia.
Quote:
GK104 is seeming more and more like a stop-gap card to keep Nvidia from hemorrhaging any more customers to AMD.
http://www.techspot.com/news/47593-j...ket-share.html
I'm honestly not too happy with either company at the moment... AMD undershot and over-priced, and NVidia are going to be able to dominate it but won't release their high end because they don't even feel the need to. The blame seriously does sit on the shoulders of AMD though.
Bad time to be a consumer, but a good time to be in the business (or be an investor).
What's so funny about a company's midrange card being powerful enough to topple their opposition's high end?
That was market share for Q4 2011 (which ended in December 2011); the 7xxx wasn't even out at that time...
With Trinity coming, I don't think entry-level discrete cards will mean much in the market in the future, as a good part of AMD's entry-level discrete GPU sales is moving to APUs now. (AMD leads Nvidia in global GPU market share, 25.8% vs 15.7%.) And while both Nvidia and Intel saw their sales drop in that period, AMD was the only one to gain market share.
@Diltech: If we trust what Nvidia has posted, the GTX 680 is the high end, the flagship card... Is it GK104? Is it not? Let's wait for the 22nd to know for sure (if the date is confirmed).
Attachment 124632
GTX 680 25% faster than HD7970 on average
Prove it?....
Nice photo, but a photo is not a benchmark though.
is it me or has the pcb layout changed substantially since the 500 series?
Why is there a paper covering the PCIe interface?
What I like about this layout is that for waterblocks there is no barrier for the waterchannels (like with the Gainward Phantom) from a wall of capacitors. They're neatly on the edge of the PCB.
One week of wait remaining ... ^^
@ WeiT.235
thx, more please ;)
25% + average would be very good for a GK104 based card.
Sure on average...
So the new performance chip from Nvidia is 50% faster than the GTX 580 on average, with half the bus and almost half the die size. Impressive if true.
I hope it is true that it is 25% faster than a HD7970. That means the HD7970 will drop in price and so will current cards, by a decent amount. If you could get a HD7970 in the $400 to $450 range, then maybe the GTX580's will drop to $300. Competition is good for the consumer.
Is NVIDIA bringing yet another version of AA into the game?
Attachment 124633
http://bbs.expreview.com/thread-49888-1-1.html
FXAA isn't what I'd call new... I guess they'll finally officially put it into the Nvidia control panel as these cards launch, and no, that's in no way 8x MSAA on the left...
I would rather see SMAA being developed; FXAA isn't that great, way too blurry, as you can see even in this zoomed-in picture.
it could easily be 25% faster in one benchmark. that happens all the time! meaningless statistic.
Let's be honest, guys. Does anybody believe that a GK104 can be much faster (say, more than 5%) than a 7970 while having LESS power consumption? I don't believe in miracles whatsoever, and this is something that has never happened before, not accounting for failures on either side. Either way, performance/power-consumption is more or less even within the same node for both brands (provided there are no mistakes, such as GF100 or R600).
Keep in mind that Pitcairn has efficiency similar to the 7970's... which means that neither of them has weird efficiency and, thus, it should be kind of easy to extrapolate the max performance of GK104 based on such numbers alone. I frankly expect around 7970 numbers with... MAYBE, a tad less power consumption. But expecting it to take less power and perform 25% better... well, they say it's free to dream :)
the picture is not a slide, it's taken by a camera, so keep in mind some lighting trickery happens there.. and yes, you are right, there will be an FXAA option in the control panel, but there is also another AA option as well, which starts with a delicious animal name lol..
Well at least I know FXAA will be added but perhaps there's another entirely new method we don't know about too but that sounds somewhat unlikely.
- Timothy Lottes from Nvidia is actively participating in FXAA development; v4.0 seems to be quite a step forward
- The FXAA option hasn't been available in Nvidia cpanel but through registry tweaks => will probably be added to nvidia cpanel with the first official GTX 680 driver
- My guess is FXAA v4.0 will be announced/released at the same time as GTX 680
yep, taking everything with a grain of salt at the moment; early news has a habit of proclaiming "up to 80% faster", which, when you dig into the validity of the comment, would translate roughly into "up to 80% faster, at physics, in 1 synthetic test, on a full moon with Shakira's Hips Don't Lie playing in the background, otherwise ymmv"
It's Nvidia; they always have more, faster, and greater than the rest, as they'd say even a $100 card is faster than their competition's fastest high-end card. It's all about how you measure your e-penis, simple as that.
No one can beat the Nvidia card, not even themselves, according to Italian Nvidia, as it's UNBEATABLE!
We are now stuck forever with this card; evolution is now starting over with Voodoo Graphics, time is reversing itself in 5 days.
I am soon 15 years younger, weee.
GK110 is High End... expected from August to end of this year...
As Nvidia seems to have no problem getting in front with their small GK104... GK110 will show up later with no pressure at all =/
Considering the rumoured GK-whatever-number isn't even in production (rumours), the current 104 is their high-end chip.
Also, if you believe Nvidia is happy with a draw against AMD, you are wrong. If they could push out a faster card, they would do so as soon as humanly possible. They want to win at everything, and not through luck or only at 1080p.
As it looks now, the rumoured GK-whatever from Nvidia will be closer to the release of AMD's 8-whatever than to the 7xxx series.
Why the hell not? Is it really that inconceivable that Nvidia could have put out a more efficient design than AMD? Really??
You can't base this on past designs alone. That would be like bad-mouthing Intel prior to the Conroe launch solely on the track record of the Pentium 4: "Do you really expect Intel to come out with a CPU that's faster and uses less power than a 64 X2??" We all know how that played out...
PCinLife says the GTX 680 is 25% faster than the HD7970. GTX 680 = HD7970 @ 1200MHz?
[align=center]http://we.pcinlife.com/data/attachme...uxm887gs8y.jpg[/align]
With no proof at all... OK, continue the hype!! I already got two GTX 580s for really cheap for the micro-ATX computer, and I just bought two waterblocks for cheap too. All because people read that the next GTX 680 is a real killer.
If those percentages were right (I hope they are, because you are all waiting for the card), it's almost 50% faster than the actual GTX 580. But with no proof, it's the same as saying I am 100% more handsome than all you boys.
So please, continue the hype. I would love to get a tri-SLI system based on "obsolete" GTX 580s. I will adopt your poor homeless chips. They will go to a good home with an also "obsolete" Maximus Extreme IV and an "obsolete" Sandy Bridge Core i7 @ 5200 24/7 (actually enjoying a tri-SLI of ancient "obsolete" GTX 480s...).
I hope someone will post real info about Kepler someday. Until then, this can be called the "Official Nvidia trolling off-topic topic". Bye for now, good luck and lightspeed.
Quoting this for emphasis since I want to bring up some points here:
Tahiti is the first AMD core that puts a ton of focus on compute and DX11 rendering rather than being a unified architecture with some DX11 bits tacked on.
Fermi was built from the ground up for compute and DX11.
AMD is currently grappling with the heat, power consumption, and die space restrictions (albeit on a more efficient process technology) that NVIDIA went through with Fermi.
As mentioned by NVIDIA in several public presentations, Kepler is meant to be a second generation "ground up" architecture which essentially means they have learned from past experiences and have focused upon perf per watt. At least that's what this slide from 2010 says.
AMD is in many ways still on their 1st generation dedicated compute / DX11 architecture due to Tahiti / GCN skipping 32nm and instead waiting for 28nm to come around.
Take that as you will. Those are just the facts. :)
Makes sense to me (:
So Nvidia's new 'ground up' arch is sort of based on Fermi, so that they had a good starting point from which to make this round of cards.
Whereas this is AMD's completely new ground-up arch on a new process, and although it performs great, I'm guessing the next arch, which will be based on this one, should be a lot better, as they themselves now have a decent place to start from.
Man, isn't it fun to speculate?
Anybody wanna sell their EVGA GTX 580? Thinking SLI 580s would be good enough for me :) The way that this thread is going, the price of this card is going to be astronomical.
That's what I'm waiting to see as well; supposedly it looks like I am going to get back a pair of GTX580s from RMA after one of my 2 GTX470s died.
That, and I'm waiting to see whether Virtu MVP works in SLI; can't test it as I'm without my cards until next week.
wow, so much excitement over Kepler (when you've got 3x 3GB 580s it doesn't matter that much :D). guys, Kepler is nothing, it's only 2.5x the Fermi
Maxwell, on the other hand, is supposed to be 10x the Kepler.. now that one is going to be interesting.. prepare yourselves for next year's internet buzz, it's going to be nuts!
Goodbye single slot water cooling...
hahaha..
I'm excited to possibly own an Nvidia card after 8 years of AMD, I hope all the hype is true.
Except for the price.. That one I hope is not true...
I highly doubt that the GPU Nvidia planned to use for their midrange product is 25% faster than a card that's already 10-30% faster than their previous gen's top card. I've never seen Nvidia release a midrange product that much faster than their previous generation's top-of-the-line model. At least not in recent memory.