AMD Catalyst 10.12 driver download links are up - DVHardware
"AMD uploaded the Catalyst 10.12 drivers to its website. The release notes aren't available yet but we do know that this release features the new user interface and support for the upcoming Radeon HD 6900 cards. It's unknown if this is the final driver or a preview."
The cooling looks like overkill for a sub-200W card; AIBs are going to have a hard time beating the reference design :)
Can't wait until some proper game benchmarks with full settings on a uniform testbed appear.
Hey, in case you missed all the great posts because you were just looking at the pictures, here's a picture for you to summarize those posts:
http://i54.tinypic.com/4kznuq.jpg
whereas the emotional common denominator can be summarized as
http://forums.nonewbs.com/media/smilies/fu.gifhttp://forums.nonewbs.com/media/smilies/fu.gifhttp://forums.nonewbs.com/media/smilies/fu.gif
Unlock 6950 into 6970 the same way as GTX465 unlocks into GTX470?! :D
This is for dual BIOS, afaik.
Awesome! Please post the screenshots as soon as you can (NDA lift, etc). :up:
This sounds very promising if true. Bring it on, AMD! :D
http://img707.imageshack.us/img707/7583/3d11p5345.jpg
http://img703.imageshack.us/img703/7361/3d11p5655.jpg
http://img210.imageshack.us/img210/5...tagep22545.jpg
http://img27.imageshack.us/img27/220...tagep23617.jpg
Thanks to Pinto from hardware.fr :up:
http://forum.hardware.fr/hfr/Hardwar...1.htm#t7698486
1450 on the memory seems quite unimpressive
Why has GDDR5 come to a screeching halt this last year?
Is it OC or the PowerTune feature? I mean the 950MHz GPU and the 1400MHz VRAM clocks.
Ask him to try the latest WHQL driver. He can download it from here.
I sure hope that 950 is not the max on the core. Hopefully there is some sort of BIOS, CCC, or overclocking-tool limitation of 950 on the card, because an OC of 70MHz on a card that has been cut down to keep power consumption in line would be less than good. Especially considering anything Fermi-related OCs like nobody's business.
Against 570.
Yes, and the 580 is ~18% faster than the 570, so if the 6970 is ~25% faster than the 570 that works out to about 1.25/1.18 ≈ 6% faster than the 580, or roughly 4-10% depending on the game, while having a lower cost, (possibly) lower power consumption, and a smaller die. The 6990 will wipe the whole floor clean if those performance and power indications are in the rumored regions.
If the HD 6970 is 25% faster than the GTX 570, why not compare it to the GTX 580?
The settings are odd too ... 2560x1600 with 8xAA/16xAF is difficult for the GTX 570's 1280 MB to handle, and there's no tessellation, no SSAO ...
I don't think reviews will tell us that HD 6970 is 25% faster than GTX 570, but I'm not in the know, for sure ... :D
They compared the 6870 to the GTX 460 1GB and the 6850 to the GTX 460 768MB, yet the cards turned out slightly faster than the 470 and faster than the 460 1GB, respectively.
It's part of their marketing: bigger numbers = better. Additionally it has some influence on reviews ("the card is designed to take on the 460, and it's even faster than the 470..." sounds better than "it can barely beat the GTX 470").
That comparison was done before any price drops, I think. So first AMD says "look, for the same price we win", then NVIDIA responds by dropping prices, then the cards actually launch and they're all right around the same perf per dollar.
(If I have my history correct.)
10.12 has the old Catalyst style :S The driver version is 8.801 and the Catalyst version is 10.12.
http://img219.imageshack.us/img219/2073/82892691.jpg
Quote:
?
Borderlands was heavily favored toward NVIDIA, but AMD got a huge boost.
B:AA, Metro, Unigine all sound like NVIDIA-favored titles too.
But like others are saying, hopefully this means the 6970 is right around the performance of a 580.
Yo boy, AMD says the HD 6970 is 15% faster than the GTX 480 on average... yet the results that came out here talk about it being 25% faster than the GTX 480 on average!!! Do you understand what happened now? :rolleyes:
+1
Quote:
Those games were chosen carefully.
If AMD says the 6970 is 15% faster than a GTX 480 on cherry-picked settings, in reality it's going to be barely faster than a GTX 570. I'm afraid those who expect 580-like performance will be sorely disappointed. :shakes:
But as long as AMD gets the prices right a GTX 570 class card is enough for me. And of course for those who want more power there's always Antilles. :yepp:
I've heard that there are 2 versions of the cat 10.12 drivers. One with the new CCC and one with the old one.
I believe you will find those two on http://forums.guru3d.com/showthread.php?t=334401
As I said in the other thread: upcoming Catalyst versions will be released in two branches until the kinks with the new CCC are worked out.
Correction!
They are released!!
I downloaded them 15 minutes ago from game.amd.com :)
PS. Gibbo from OCuk has hinted that the HD 6970 could be £100 cheaper than the GTX 580 at release ....
Someone from MSI did benchmarks? I surely hope not because releasing those numbers would be a breach of NDA and have serious impact on the allocation.
here's some food for thought.
All the benchmarks you see are for the card limited to a 190W TDP; you can still adjust the card to a 250W TDP (closer to the 580). What do you think that would do to the performance numbers?
Yeah... there are two versions, CCC and CCC2... well, nothing new there, pretty much the same, new design but that's all :)
Don't know :) I doubt it will do much. I would suppose that if it is a power switch, AMD has set it so the card reaches maximum performance with the least power. So maybe 5% if it is a power switch; if it differed a lot more, they might not even need Antilles as a dual GPU, which would be better for their money (1 die instead of 2).
Since when does increasing the TDP limits do anything for performance? Overclocking, sure.
You are assuming that any power saving methods AMD has worked up will have an impact upon in-game performance. That would be a huge mistake on AMD's part and one I am sure has been addressed long ago.
Having a throttle at 250W isn't going to serve OC ability at all, it's just a block. PowerTune might be nice for those who want to optimize power usage, but that's not something overclockers are known for. It might enable higher stock clocks for AMD since they don't have to worry about going over TDP, but for overclockers it means there's 0 headroom left.
ROFL i just noticed the thread tags.
:welcome:
Turbo :yepp: :rofl:
no no no you are kidding :rofl::ROTF:
Colorful iGame GTX 460 1 GB
http://img137.imageshack.us/img137/2633/turbouv.jpg
http://tpucdn.com/reviews/Colorful/i...rfrel_1920.gif
http://tpucdn.com/reviews/Colorful/i...er_maximum.gif
I'm hoping those cards OC to hell and back like the 500s do. At 900MHz both the 570 and the 580 are in a league of their own, and it seems a bunch of them hit those frequencies. My poor 470 only hits 875MHz max, 850MHz 24/7, and I barely swipe past a stock 480 ( :( ). If those 6900s can do a good 1100MHz, probably without that PowerTune thing on, I bet you they'll be beasts. I just hope most of it can be done on the stock cooler or a decent waterblock.
I thought you were an editor with cards, yes?
http://i54.tinypic.com/1zqaurt.jpg
Now that's interesting, since Perlin Noise focuses on shader throughput
So at +10% power, core clock doesn't fluctuate? But at default, core clocks change?
Feels like turbo boost for GPUs
What a trainwreck of a thread ... please 15th, save me, save us all.
No, I'm not saying it automatically ups the clocks. It has a default clock, and the TDP limit is there to make sure the card doesn't go over it (the TDP); hence the red line at 800 MHz, while at the default TDP the card will fluctuate between 650 and 800 (more 650 than 800) to stay within the lower TDP and thus limit performance.
If the card redlines at 800MHz and 220W, you won't get extra performance at 250W, only if you OC.
OOOOH Jesus help me :D
Quote:
please 15th, save me, save us all.
I don't get it - isn't the card's default clock 800? With the default TDP limit it will fail at achieving that consistently and will occasionally fall to 650 levels?
OH SHI- so this means that, even without any overclocking, if you flip the switch and up the TDP limit, you will get greater performance?
Oh right, sorry.
But, everything else was right?
Does the default TDP force the default clocks down (to below 800) occasionally? And if you were to up the TDP you'd get more performance?
I do think the initial "this card is very slow" reviews had to do with that, then. With newer drivers they were able to up the TDP limit and reach the true power of the card.
I will wait for the 15th so we can figure out exactly what this whole PowerTune AM/FM-radio-switch, power-saving, maybe-turbo-boost stuff is. For AMD to set the card to a TDP and raise or lower clocks to stay within that limit would be nothing short of pure stupidity, considering most people who buy this card have no interest in OCing and know little more than how to plug in a stick of RAM, let alone do the research to find out that to get the most performance out of their brand-new $400+ video card they need to flip some stupid switch. I appreciate AMD trying to make the card as efficient as possible, but doing it at the cost of performance like that is way too far over the line...
Now that would be superb! :up:
Someone is gonna get banned... :hehe:
1100MHz using a waterblock? I was hoping for 1300-1400 with all those fancy turbo switches and stuff! :D
Besides, AMD promised really good OCing of these cards, since the power regulation is very similar to CPUs.
Switch = dual BIOS - mainly to fight cold bug and other things
What I think neliz is trying to say is this:
The TDP limit can be adjusted by a slider (+/- a certain %) and the card adjusts the core clock to keep the card within the TDP limit (note: not heat, but TDP). The core clock is also the limit of how high the card will clock. That is, if you set the TDP to +20% to 240W, but the core clock is maintained at 800 MHz, the card will just stay flat at 800MHz at all times. However, adjust it to 900 MHz, and the card will run at 900MHz as long as it is within the 240W TDP limit you set.
Example: With higher TDP limit, the core clock can be maintained higher - hence the chart shows that at +10% power, the core clock can be maintained at 800 MHz evenly.
To go above 800, you still have to OC the card - and you can adjust the TDP as well to maintain it if you want.
It feels like... turbo core for GPUs
Interesting stuff! It makes 2 x Cayman XTs feasible for the 6990, since you can play around with power...
edit: I also think this is where the difference between 6970 and 6950 lies too. 6970 is at 880 MHz which is 10% higher clock than 6950. But its max TDP is allegedly set at 250W, or 25% more than the 200W of the 6950. So the 880 MHz clock can be maintained at a higher level more consistently than the 800MHz of the 6950.
No... the 6970 is rated at 250W max apparently. Just going off that slide, you can see that in Perlin Noise, 6950 is 650<->800MHz at stock TDP limit. Once upped 10% to 220W, it runs consistently at 800 MHz.
The 250W limit of 6970 allows it to run up much higher at reference. In other words, the 6970 should have no problem maintaining at least 800MHz at all times with a 250W limit.
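To make the mechanism concrete, here's a minimal Python sketch of the clamp behavior described in the last few posts. Purely illustrative: the 650/800MHz and 200/220W figures are the rumored 6950 Perlin Noise numbers from the slide, and the linear power-vs-clock model is my assumption, not AMD's actual controller.

```python
# Illustrative model of a PowerTune-style clock governor. The real
# thing is an on-chip microcontroller; all numbers here are rumors.

def powertune_clock(load_power_at_max_clock_w: float,
                    tdp_limit_w: float,
                    max_clock_mhz: float = 800.0,
                    min_clock_mhz: float = 650.0) -> float:
    """Return the core clock the governor would settle at.

    Assumes (simplistically) that board power scales linearly with
    core clock for a given workload.
    """
    if load_power_at_max_clock_w <= tdp_limit_w:
        # Workload fits inside the TDP limit: run flat at full clock.
        return max_clock_mhz
    # Scale the clock down so estimated power meets the limit,
    # but never below the floor clock.
    scaled = max_clock_mhz * (tdp_limit_w / load_power_at_max_clock_w)
    return max(min_clock_mhz, scaled)

# A Perlin-Noise-like load that would draw ~220 W at 800 MHz:
print(powertune_clock(220, 200))   # stock 200 W limit -> ~727 MHz
print(powertune_clock(220, 220))   # +10% limit -> 800 MHz (flat line)
# A typical game drawing ~180 W never trips the throttle:
print(powertune_clock(180, 200))   # -> 800 MHz
```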
No, this apparently is set by AMD Overdrive or some new software, and will apply to everything.
Quote:
yes i think that it's going to throttle in such benchmarks, everything else doesn't make sense
http://img100.imageshack.us/img100/928/fot023.jpg
Here is the NVIDIA dual-GPU monster to kick the AMD HD 6990's butt:
http://img821.imageshack.us/img821/4...adualfermi.jpg
;)
Nope, it will be in everything, and that would make sense. It will give most users lower power consumption by not drawing power they do not need, which is basically fantastic. The only thing that would be better is if you could use it with profiles or an FPS target, e.g. max 100 FPS and adjust power based on that (something like the sketch below).
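That FPS-based idea could look something like this hypothetical sketch. To be clear, no such AMD software exists as far as we know; get_current_fps and set_power_limit_percent are invented placeholders, not a real driver API.

```python
import time

# Hypothetical FPS-targeted power governor: lower the PowerTune limit
# while the game stays above the target frame rate, and raise it back
# when performance dips below the target.

def govern(get_current_fps, set_power_limit_percent,
           target_fps: float = 100.0, step: int = 2):
    limit = 0  # PowerTune offset in percent, kept within -20..+20
    while True:
        fps = get_current_fps()
        if fps > target_fps * 1.05 and limit > -20:
            limit -= step   # plenty of headroom: save power
        elif fps < target_fps and limit < 20:
            limit += step   # below target: give power back
        set_power_limit_percent(limit)
        time.sleep(1.0)     # re-evaluate once a second
```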
So why aren't you complaining about the GTX 580 throttling in FurMark, then? Apparently it is limiting performance in that excellent game by quite a margin.
Look at this feature from a different angle -> Perlin Noise from Vantage takes the HD 5870 almost as high as FurMark does. This new PowerTune will undoubtedly affect game performance very slightly, in very specific conditions only, or not at all. I have not come across a game that puts a load on the GPU similar to Perlin Noise or FurMark (be assured I've tested quite a lot of them when my HD 5870 was fresh).
The good thing about this tech is that it is microcontroller-based and not dependent on profiles.
I can set my card power to be -20% and when playing older/less demanding games my card will take care of staying at reasonable power use while still delivering outstanding performance.
Imagine a situation where you have card X with a sustained gaming power of 200W and card Y with a variable power of 100-200W. Now you start Quake III TA with no vSync so you can frag better. Card X will draw 200W and give you 2500 FPS, while on card Y, which might also reach 2500 FPS at 200W, you prefer to save 100W and be happy with only 1500 FPS. The side effect of that power saving will of course be lower noise as well.
So instead of spinning new power features as bad thing try to look at them from different angle :)
Which makes sense again... like I said, the earlier bench results that put the card at 5870 like levels could be because of the default TDP limit...
once the limit was increased to 250W (or whatever), we got the real performance of the card which is slightly below GTX 580.
Although, secretly I am wishing that the current scores we have ARE with the default TDP limit, and if we remove the limit, GTX 580's butt will be kicked... that'd be:
1. Awesome
2. My next card
That is Galaxy's dual GTX 470.
As someone whose interest in this thread is purely for tech curiosity* and the inner-world of graphics whores -- a breed that would confound an international symposium of psychiatry -- I just love the image of you all tossing and turning through sleepless nights fretting vicariously on behalf of the Neanderthal Sixpack. Oh Noes!!11, NVIDIA have re-branded again... Oh Noes!!11 there's a switch on the card. Think of the children (sixpack), ban this sick filth now!
Fortunately windows aren't taxed these days because glass is less transparent.
Anyway, do carry on... not you specifically, the breed. /research
[*] my pc won't even run Aero
Yo boy..... no no no, this is not an NVIDIA card, this is from GALAXY, and this card does not use the GF110 core; it uses GF100 (2x GTX 470).
Quote:
Here is the NVIDIA dual-GPU monster to kick the AMD HD 6990's butt:
look here
http://wccftech.com/images/news/giga...geview.php.jpg
and look here this is from nvidia dual-GPU (GF110)
http://wccftech.com/images/news/misc/147a.jpg
Both are 2x 8-pin, so they won't go retail in that config. They're probably ESes or, I guess in Galaxy's case, a "special edition" card.
I'd prefer if I could make it a TDP floor. :rofl:
I doubt AMD would set the TDP throttle at a level that would hinder game performance at default clocks. That's the performance we get at 880MHz, and that's it. If you want more you'll have to OC and up the throttling limit to 250W. Some people seem to have gotten the idea that upping the limit is somehow different from normal overclocking, but it isn't. Of course the benefit of having a slider that goes to 250W is that it might be higher than what AMD would otherwise set as the max TDP. Of course there's always the risk of bricking your card too. :ROTF:
Not at XS, but you need to think about the majority of the market here. Of most of the people I have ever met who call themselves computer enthusiasts, fully build their own systems, and have a decent amount of knowledge about computers and hardware, only two of them besides myself overclock their CPU, let alone their GPU. Most of the time I end up doing all the overclocking for them anyway, because they don't have the knowledge or the interest. The same applies to the majority of guys who work in the local hardware shops: most of them buy parts, assemble them, and call it a day. So let's say that overall, out of the people I have met who would be in the market for a 6970, only 10% of them will overclock it (and that's being very generous). And it's not like I don't have a great deal of exposure to this crowd, either; custom-built computers are my hobby and my job, so they take up a good amount of my life. There's more to the custom-built and gaming scene than what is on XS, though I would have a hard time calling anyone but XS "enthusiasts".
@Lightman: I guess it comes down to a difference in viewpoints. For example, I couldn't care less about using an extra 100 watts to play old games (I own a 480, after all, haha). Also, I have noticed when monitoring my GPU usage in games, old and even some newer ones, that it is very rare for me to be using 100% of my GPU while gaming; most older games only use 30% of my GPU, so my temps are much lower than when I am at 100%, and the same goes for my power usage anyway. The issue I have is that out of the box, when you need all the possible power your GPU has (playing Crysis, for example), your GPU could potentially be downclocking itself for the sake of power consumption, which begs the question of why you bought such a high-end GPU in the first place when in demanding games it will slow itself down at stock settings. Now, granted, I doubt it will be that drastic, but we won't know until the 15th comes around.
As to why I have not complained about the 580/570 in FurMark: I tend to think that any 3DMark test is much closer to real gameplay than FurMark. FurMark is made for three reasons: stability testing, heat testing, and power consumption testing. 3DMark is at least made to provide some level of game-performance representation (even though it fails to do so more often than not). I have yet to see the power throttling on the GTX 580 come into play in games, because if I remember correctly it is driver-based, not hardware-based... and for the record, I am not a fan of what NVIDIA did there, which was deliberately making the gains in power consumption look better than they really are. However, power consumption is at the BOTTOM of my concerns when buying a new video card or when recommending one to someone.
Who knows, maybe no games will trip the downclocking on the 6970, but if AMD does limit game performance then I will have a problem with it.
@[XC] hipno650
I see where you're coming from :up:
I could make a car analogy but I will refrain :p:
Let's wait for official benches to judge this new power saving tech.
GPU usage is a useless metric to correlate with power consumption. Far Cry 2 used to show constant 100% usage, while Crysis jumped around 70-95 percent, yet temps while playing Crysis were much higher.
Quote:
I guess it comes down to a difference in viewpoints. For example, I couldn't care less about using an extra 100 watts to play old games (I own a 480, after all, haha). Also, I have noticed when monitoring my GPU usage in games, old and even some newer ones, that it is very rare for me to be using 100% of my GPU while gaming; most older games only use 30% of my GPU, so my temps are much lower than when I am at 100%, and the same goes for my power usage anyway.
Moving this to the extreme:
http://img.photobucket.com/albums/v2...n/amd_bear.jpg
UK prices? (Gibbo @ overclockers.co.uk forum) found on B3D forums.
6950 2048MB = £215-£230
6970 2048MB = £280 - £320
I just hope the wait does not kill all of us:D
In my experience, usage while playing games is directly related to the temps experienced, across a wide range of games. Granted, there will be differences, and yes, Crysis does seem to get cards hotter and make them use more power than most, but usage still shows a certain level of GPU power consumption. A GPU running at 50% uses less power than one at 100%; that's pretty common sense for the most part...
Just like when I run WCG on my CPU it uses more power than when I'm surfing the web...
I have also noticed that most modern cards will auto-downclock when GPU usage gets below a certain level, which is fine for the most part because you don't need the extra power anyway... this, however, has up until now been driver-based, not hardware-based.
So this switch is like AMD saying: 'this is what we can do if we use the same TDP as the competition'. :D
Gibbo's post at the OverclockersUK forum.
Quote:
NVIDIA's & ATI's Latest Pricing as of end of this week
GTX 460 768MB = £105 - £115
5830 1024MB = £115-£130 **EOL** ***Practically 5850 performance - BARGAIN***
6850 1024MB = £130-£140
GTX 460 1024MB = £140-£160
5850 1024MB = £130-£150 **EOL** ***Stock Non-Existent***
6870 1024MB = £170-£190
5870 1024MB = £180-£200 **EOL** ***Nothing sub £200 beats this***
GTX 470 1280MB = £190-£220 **EOL**
6950 2048MB = £220-£230 ***Expect £10 price increase in January***
GTX 480 1536MB = £250-£270 **EOL**
GTX 570 1280MB = £250-£290
6970 2048MB = £285 - £320 ***Expect £20 price increase in January***
GTX 580 1536MB = £350-£450 **Supply & Demand will keep this high**
6990 4096MB = £450-£500
Things are starting to get interesting for sure. We're starting to see the big picture now, as well as the intentions.
If the 6950 comes in at just above £200, I'll buy it at launch :yepp:
Great pricing for 2GB cards
FWIW those would be close to 58xx launch prices it looks like
If they are anything like the 5870 launch prices, I am pretty sure I will end up with 3 6970's.
I bet one 6970 is overkill for 1920x1200. 2GB on those guys is asking for 2560x1600 or multi-monitor!
I only game at 1680x1050, so a 2GB 6950 WILL probably be overkill. But at least I can increase those AA and AF levels indefinitely :D