Attachment 124765
Seems AMD needs a turbo feature too lol, that turbo thing was a smart idea and really made Kepler look better than it really is.
^Careful, those charts show the % increase each card gets over stock when overclocked. It's not actually comparing them against each other, only relative to themselves.
(http://www.xbitlabs.com/articles/gra..._14.html#sect0)
Quote:
Comparing the results of the overclocked cards in the diagrams below, you should keep it in mind that our AMD Radeon HD 7970 was overclocked by 24.3/27.3% (GPU/memory) whereas the GeForce GTX 680, by 17.3/18.6% (without dynamic overclocking).
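To make that caveat concrete, here's a minimal sketch of how you'd normalize the gains (Python; the FPS figures are hypothetical placeholders, only the overclock percentages come from the Xbit quote above):
Code:
# Hypothetical stock/overclocked FPS figures for illustration only;
# the GPU overclock percentages are the ones Xbit quotes above.
cards = {
    "HD 7970": {"stock_fps": 60.0, "oc_fps": 72.0, "gpu_oc_pct": 24.3},
    "GTX 680": {"stock_fps": 62.0, "oc_fps": 70.0, "gpu_oc_pct": 17.3},
}

for name, c in cards.items():
    gain_pct = (c["oc_fps"] / c["stock_fps"] - 1.0) * 100.0
    # Gain per 1% of core overclock -- this is what lets you compare
    # cards that were overclocked by different amounts.
    print(f"{name}: {gain_pct:.1f}% total gain, "
          f"{gain_pct / c['gpu_oc_pct']:.2f}% gain per 1% core OC")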
Attachment 124769
Attachment 124770
Attachment 124771
Well here you go, OC comparison in action then.
PS: something weird with the 580 results though lol
EVGA overclocks GTX 680 to 1848MHz
http://www.maximum-tech.net/evga-gtx...848-mhz-11099/
Hello processor bottleneck....how are you today? Messing with the Xbit results you say? Bad processor....BAD! ;) ;)
Everyone should also be aware that the overclock speeds being bandied around will NOT be hit in every game. An offset that gives you 1300MHz will only actually run at that number in a few games, even with a Power Target of +35%. Most other games will likely run far below that.
I don't understand why some sites aren't making that distinction....
Any reviews with the 680 at its Boost clock (1058MHz) and the 7970 at similar clocks? I would like to see how the two architectures compare.
It seems the 7970 is much less CPU dependent... x-bit have their CPU @ stock
Edit: 4.625GHz on the CPU
Edit2:
"The GeForce GTX 680 2GB overclocked to 1186/7128 MHz is an average 4-11% ahead of the AMD Radeon HD 7970 3GB (overclocked to 1150/7000 MHz) in 1920x1080 and 1-9% ahead in 2560x1600." Xbit
Fairly close clocks - fairly close performance @ 25x16. I'd say 57x12 will be an absolute match on same clocks.
Strange that the nvidia cards seem to lose their performance more rapidly (compared to AMD) the higher the resolution goes, ever since the 8800... Or maybe it's just my illusion :D
The GTX 680's Boost clock is NOT constantly 1058MHz though. At default it fluctuates from 1006MHz to 1150MHz+ from one game to another.
The only way to accurately measure clock for clock is to determine the clock speeds at which Boost runs within each game and then overclock the HD 7970 accordingly for each situation. That is a HUGE job.
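If someone did want to attempt it, the bookkeeping would look roughly like this (the games and Boost clocks below are made-up placeholders, not measured values):
Code:
# Step 1: log the average Boost clock the GTX 680 actually sustains
# in each game. Step 2: clock the HD 7970 to match, per game.
# All values below are hypothetical placeholders.
observed_gtx680_boost_mhz = {
    "Dirt 3": 1097,
    "Battlefield 3": 1058,
    "Metro 2033": 1032,
}

for game, mhz in observed_gtx680_boost_mhz.items():
    print(f"{game}: 680 sustained ~{mhz} MHz -> "
          f"set 7970 core to {mhz} MHz and rerun")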
did anyone compare overclocking just the core vs just the memory to see which one is more beneficial?
What are the GTX 680's folding numbers like? Please tell me NVIDIA didn't gimp Kepler's GPGPU performance in order to beat ATI.
Nope. Base clock remains constant. Think of it as a floor that the card won't pass through. The GPU Boost clock should be thought of as a movable ceiling that can be modified via offset frequencies.
Which is why I am wondering how so many sites can claim such high overclocks. I haven't seen one situation where an offset of 200MHz+ resulted in 1300MHz+ clock speeds across every game....at least under air cooling that is.
Thanks, maybe someone will come up with a BIOS to turn it off.
that is exactly what came to my mind too... :) set the turbo limit to +0MHz
How do you thank in this forum??? :shrug:
It's not a turbo limit, but an offset clock rate.
If you set it to 50MHz, it will range from 1050 to 1150; if you set it to 0 you just run at stock and it can go as high as 1100MHz.
What I suggested was telling it that the TDP is much lower, which may prevent it from boosting at all.
And to help it out more, maybe you can force the fan speed lower so that it heats up, but not past the safe range where it likes to throttle.
But then you also have to warm up the GPU before you benchmark.
That's a lot of work right there just to get a MHz vs MHz comparison, which honestly we can't use in real life anyway: even if people set their 680 to 1100MHz and someone else sets their 7970 to 1100MHz, the 680 won't stay at a static rate for them.
I would just look at what people are getting in typical use, or start up your own thread about max 24/7 clocks and make a spreadsheet.
From what I've seen a 7970 will need a high resolution and a 50MHz lead over the 680 to be equal, but only for a few games, since some love one card vs another.
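As a rough sketch of how that offset behaves (using the round 1000/1100MHz figures from the post above, not NVIDIA's exact 1006MHz base / 1058MHz Boost spec):
Code:
def boost_window(offset_mhz, base_mhz=1000, max_boost_mhz=1100):
    """The offset shifts the whole Boost window up; it does not pin
    the clock to one value. Round numbers from the post above."""
    return base_mhz + offset_mhz, max_boost_mhz + offset_mhz

for off in (0, 50, 150):
    lo, hi = boost_window(off)
    print(f"offset +{off} MHz -> clock wanders between {lo} and {hi} MHz")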
the thanks button is under my avatar
thanks!
I'm either blind or it is not there :D Attachment 124772
Oh, it's not there because you need 100 posts first.
Otherwise we'd just make clones to pat ourselves on the back.
Well, the power target can actually be set into negative territory in order to limit clock speeds and power consumption. However, no matter what you set it to, the Boost Clock is a constantly moving target since every game loads the GPU in a different way. Basically, the GPU will always try to find an optimal clock speed based upon situational conditions.
Setting the Power Target at 0% (which is the default by the way) won't allow you to overclock the Base Clock anyways since there is no actual option to change that value.
I am sure someone will come up with a modded BIOS in a short time. :)
Home of the world’s fastest Graphics Card for 855 days and counting. <--LIARS! Haha
Huge grats to Nvidia for this monumental achievement! I honestly didn't think they would take the crown in every respect, but they managed to pull it off. :clap: With Kepler now supporting 3 monitors, I no longer have a reason to stick with AMD (yeah, yeah, SLI... I know.)
AMD did attempt to steal some thunder today by "Leaking" 7990 info haha
I'm going to take a completely uneducated guess that it will cost $1100 and I'm sure Nvidia will just counter that with a GTX 690 for $1000.
Quote:
Should the leaked information turn out to be correct, the new dual GPU card will hit store shelves in April.
Again, Kudos to Nvidia :clap:
Just ordered my MSI GTX 680 early this morning. :yepp:
The bottom line is this: the GTX 680 is $50 cheaper on average than the AMD 7970.
I dunno... You would never be able to see your crosshairs behind the 2nd monitor's right and the 3rd monitor's left bezel.
Maybe a game like rFactor will allow the 4th monitor to display car stats and lap times, or maybe Battlefield will allow that 4th monitor to be the game map during gameplay.
EDIT: Made some edits, maybe it could be possible for HUD things
The 4th is for the rear view mirror, see.. you get a rear view mirror and place it over your center monitor, and then place the 4th monitor behind you.
That way you can always watch your back. Great for FPS, racing, flight sims, WoW, anything except a strategy game I guess.
If anyone's interested, after much toiling these past two weeks... http://bit.ly/GTX680Review
That would be super cool but also much more demanding. Most game engines only render the scene that is in your FOV with a slight overlap. Traditionally, most racing games with a rear view render the scene really half-assed.
That's not to say that it couldn't be done. :D
I wonder how well this technology is going to translate to the mobile market. If Kepler carries its success into mobile or Tegra, they might just become a market leader there.
http://i829.photobucket.com/albums/z...ye_GTX_780.jpg
"The best is yet to come." ;)
My fellow Japamd, I'll just say that AMD has an answer ready for Kepler, and it's not a dual GPU ...
http://img18.imageshack.us/img18/8572/jhonb.jpg
Yeah, it's a gigahertz edition of the 7970. Hopefully it's a little more than 1GHz, with some memory clocking too.
dual 8-pin, 8-phase GTX680 here
http://product.pcpop.com/000337190/P...html#005200792
Looks like all of the reviewers are coming out with consistent results and data. The GTX 680 is a nice little low-temp, low-heat, efficient response to the 7970. However, my interest really lies with the GTX 780. I think we've got another GTX 480/GTX 580 situation on our hands (with the same architecture being released and renamed to different series within the same year).
The good racing sims out there are still running on DX9 and outdated graphics. It doesn't take much to max out iRacing over 3 screens and have it run at 120+ fps.
The easiest way to make the 4th monitor a rear view is to open another instance of the sim and run it as a spectator. Then just change the view until you get the rear view of your car.
It was a complete joke that had a slight bit of "maybe that is a good idea" to it, while still being ridiculously useless for the majority.
Some racing games actually have mirrors, and I like them so I can block, but they are usually very small and blocky looking inside.
Framerate is really important in online sims. I'm waiting for rFactor 2 atm.
That rumor is funny because they were also trying to guess GK104 and they were wrong....
680 is out, can we now let this thread die?
-PB
Did you see the GTX680 frequency constantly moving in your tests?
There are reports that it can drop under the base clock during certain situations. I heard about this a few weeks back, maybe the newer drivers fixed this.
A bit more than that... ;)
I'm a bit confused about the turbo+OC
Hardwarecanucks and techpowerup both write that the OC offset leads to increased turbo headroom and activity, and that the (guaranteed?) base clock cannot be increased. This would mean that, depending on the circumstances, you won't necessarily get the full benefit of a given OC (i.e. a 15% OC equalling a 15% performance increase).
However, the advertising slide for this feature shifts the whole curve up, including the working point "base clock" and the subsequent turbo frequencies. That seems to imply that the new base clock = old base clock + offset is guaranteed. In some reviews OC gives proportional performance benefits even in very demanding games, in some it does not.
Now what is the deal with that?
Do any overclocking utilities work properly with this yet?
:)
Got a link?
Edit: It works great !!!
and my first 10k 3D11 run:
link
:)
There, fixed my pic lol.
-PB
AMD's best response is to lower the price of 7970 by $150...:D
But I doubt they can afford that given the expensive BOM of the 7970.
A $450 7970 would be enough to make it attractive, that's for sure. $400 would be insane.
As usual, shops are making a few extra bucks ^^
http://i119.photobucket.com/albums/o...cis/gtx680.png
Yeah min price $700AUD here :(
Keen for GTX 685 however.
-PB
I couldn't help myself..... P10694 best so far.
:up:
GIGABYTE GTX 680 2GB - $600 or 450Euros.
Quite reasonable
http://desmond.imageshack.us/Himg837...jpg&res=medium
Here in Spain they are more costly and we have no availability.... ....
http://skinflint.co.uk/eu/?cat=gra16...GTX+680#xf_top
For EU buyers.
Cheapest option is MSI from 455€ (tax included) + shipping.
Where are these "reports"? I've never seen anything to indicate a game will lower the base clock and I had the clock speed monitor running through every benchmark I did.
As for the clock frequency, yes it does constantly move as the resources within the GPU shift; however, most games "lock in" at a standard clock speed. Case in point, Dirt 3:
http://images.hardwarecanucks.com/im...TX-680-121.gif
All right, this is a bit of a complicated answer so I will try to make it as short as possible.
First of all, increasing the offset WILL NOT guarantee that the GPU Boost will increase clock speeds to that level all the time. It will only do so when there is ample TDP / Power headroom which as we have been discussing is largely based off of the usage characteristics of every game / benchmark. A good example of this is Vince's 1800MHz overclock. If you look carefully at the Clock Speed, there are several instances where the GPU throttles down to drastically lower frequencies as it hits a power capacity wall.
This is why increasing the Power Target is so important; it basically gives you additional overhead to work with. However, the Power Target can only get you so far.
I'll give you an example:
Say you increase the Offset by 150MHz.
This means that games that normally Boosted to 1150MHz would now be running at 1300MHz and games that ran at 1058MHz would essentially run at 1208MHz, hence the linear offset NVIDIA showed in their presentation slides.
HOWEVER, it will only increase clocks to those levels if there is extra overhead for it to go above the default TDP. Hence why you need to increase the Power Target.
Without increasing the Power Target, there is a very good chance your Offset clock won't translate into actual in-game overclocks.
This is also why increasing ONLY the Power Target can lead to higher performance. Here's why:
Typically, the GPU will dynamically boost its clock speeds to stay as close to the default TDP ceiling as possible. Now what happens if you INCREASE the level of that ceiling? Well, the clock speeds have that much more headroom. I have seen situations where raising the Power Target to 120% will increase some in-game frequencies by 7-10%.
Hope that helps. :)
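A toy model of that offset / Power Target interaction, as I read the explanation above (the TDP and watts-per-MHz constants are invented for illustration, not NVIDIA's real numbers):
Code:
def effective_clock(game_boost_mhz, offset_mhz, power_target_pct,
                    tdp_w=195.0, watts_per_mhz=0.16):
    """Toy model: the GPU requests (normal Boost + offset) but falls
    back to whatever the power ceiling can sustain. All constants
    here are invented for illustration."""
    requested = game_boost_mhz + offset_mhz
    ceiling_w = tdp_w * (1.0 + power_target_pct / 100.0)
    sustainable = ceiling_w / watts_per_mhz  # MHz the budget allows
    return min(requested, sustainable)

# Same +150MHz offset, with and without raising the Power Target:
for pt in (0, 32):
    mhz = effective_clock(game_boost_mhz=1150, offset_mhz=150,
                          power_target_pct=pt)
    print(f"Power Target +{pt}%: runs at ~{mhz:.0f} MHz")
With these made-up constants the +150MHz offset only fully materializes once the Power Target is raised, which is exactly the behaviour described above.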
Hah, and I even recognised this shop based on that layout. xD If you check my location it makes sense. :) The other cards are more reasonably priced, a bit high but no higher than typical Finnish pricing. However this one's different cuz it's in stock though... talk about price gouging haha.
Yikes! Check this out!
GeForce GTX 680 Release Driver Limits PCI-Express to Gen 2.0 on X79/SNB-E Systems
http://www.maximumpc.com/article/new...ci-e_20_speeds
If this is true, then Nvidia = FAIL on this launch
301.10 fixes that.. also there is a registry fix for 300.99..
http://www.4gamer.net/games/022/G002210/20120323002/
Nonetheless I'm disappointed by the GTX 680, especially because of GPU Boost.. you can't disable it, and that is enough to make me never buy an NVIDIA card again unless someone manages to hack this :banana::banana::banana::banana:.. After a 4 year break, hello to the red team again..
well we all know that pcie3 only helps cards with 3GB of memory or more
/sarcasm
This affects only a few people, but it would be nice if they fixed it; then we could see how much performance was lost on Ivy Bridge systems. Chances are it's 5% or less.
Everything was nice and dandy until I learnt there is no way to disable it.. I want full control over my card.. You are stuck with what the vendor decided for your card; even if the old way was similar in the end, it still feels like there was more freedom.. For starters they should have given me at least +60% power, not +32%.. AFAIK there is no way to undervolt either..
Try X8 X8 X8 (X8) Tri or Quad Fire multi-monitor setups along with some SSDs in RAID and you'll soon see the difference. I understand that it's only 2% of users. And I understand that it might not be a huge difference at this point. But if the lame ass driver support team at AMD can figure out SB-E support, Nvidia should have. It is simply sloppy and does not inspire confidence when the new high end card cannot run full speed on the current high end platform at launch. I'm not a fanboy of either camp, so I find stuff like this disappointing and don't make excuses for it. It's not end of the world stuff... just disappointing.
I frequent your site, visit it a few times a day, you guys made an impressive site. Good to know more of the big guns who participate in our community.
I read the review, you did a pretty good job and the follow up article was quite a good read too. Read the Hardwarecanucks review as well, Sky delivered as usual and I liked his conclusion, was curious about the patched shogun 2 results as that's the only game I play these days.
I didn't even have to take a look at the [H] FPS graphs after those reviews, it was quite obvious the 680 is the card to get. I expected to be reading reviews for hours and didn't even look at anand, or guru3D the picture was clear, NV won on all fronts.
Really good to see NV made a turnaround and delivered this efficient, silent, small and still the fastest card ever.
I am glad I waited to see what to get, and boy, it was worth the wait.
just arrived---
great driver
low temperature and noise
great performance
http://www.hwmaster.com/forum/T-Thre...79270#pid79270
I am really happy now!