Same; a 5970 will suffice until we see 1st- or 2nd-gen 28nm. The current process is right at the top of the cost/performance curve, where further improvements get smaller and more expensive in R&D terms. Bring on 28nm.
Those CFX #'s are interesting... basically at 2560x1600, the 6970 and 580 are neck and neck when dual GPUs are used, with the 6970 slightly lower in min FPS. Antilles should be a beast, but going 2x6970 budget-wise might be worth it if you game at that resolution
It would be stupid if there were no architectural changes, but that's the X-factor. If you're going to ask why the performance is nearly the same as the older generation's, that's the only logical explanation besides a serious engineering failure, which I doubt here based on the theoretical figures. The same arguments were thrown around when the 5870 was released, and people chuckled that it didn't beat the 4870X2 or the GTX 295, and look at where it is now...
As for the 480, it hasn't gained as much as the 5870 has over its lifetime (granted, the 480 has been around for roughly six fewer months), and the 570/580 are based on the 480, so who knows where the 570/580 will head.
Last edited by zerazax; 12-17-2010 at 02:11 AM.
The 5870 was based on the 4870, which was based on the 3870, which was based on the 2900XT. Yet the 5870 was able to see significant performance increases, meaning the 570 and 580 can too. The 6970 is more or less equal to the GTX 570, and I doubt AMD is really going to "pull away" with drivers.
CPU: Intel i5 2500K + Antec Khuler 620 Memory: 4GB DDR3 Corsair DHX @ 1600MHz CL7 GPU: Nvidia GTX 560Ti + Antec Khuler 620
Motherboard: Zotac Z68ITX-A-E HDD: Crucial M4 128GB + 2TB Samsung F4EG Chassi: Lian Li Q11B PSU: Cooler Master Silent Pro 850W OS: Windows 7 x64
Welcome to my home theater! | mattBLACK Gallery | Minima "H20" Gallery
As I said earlier, I think the reason we won't see huge driver gains is legacy driver support.
http://www.ngohq.com/news/16670-amd-...eon-cards.html
This occurred in late October of last year. My guess is that by no longer taking anything older than the 2900XT into consideration, they were able to make sacrifices that might be detrimental to the 19xx and earlier series and focus entirely on making improvements for R600 and up (basically anything based on a 5-way shader architecture). Some of the biggest driver gains were made during the Radeon 5xxx years, which is surprising considering the drivers should be mature at that point; I think 10.3 was where AMD had a huge jump.
I think we might see 5% at best; a lot of the performance is going to be held back by keeping driver support for anything older than the 69xx. That is, a 5-way shader driver is going to be considerably different from a 4-way shader driver, and that is going to slow down improvements for this new architecture. I suspect this is one of the reasons why Fermi shaders are individually weaker than those in the GTX 28x series and below: NV's current drivers were built around cards based on G80 shader technology and focused on extracting as much performance out of them as possible. It will be difficult to make drivers that make Fermi faster while not hurting anything before it, I think.
I think if both companies focused entirely on making a Fermi-only driver and a 69xx-only driver, we could see huge improvements.
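To illustrate the 4-way vs 5-way point above, here is a toy sketch of why a driver tuned for one bundle width doesn't automatically suit another. It is purely illustrative, nothing like AMD's actual shader compiler, and the instruction stream and dependency sets are made up:

```python
# Toy greedy VLIW packer, purely illustrative (not AMD's shader compiler).
# deps[i] is the set of earlier op indices that op i depends on; ops can
# share a bundle only if all their dependencies finished in earlier bundles.
def pack(deps, width):
    done, bundles, i = set(), [], 0
    while i < len(deps):
        bundle = []
        while i < len(deps) and len(bundle) < width and deps[i] <= done:
            bundle.append(i)
            i += 1
        bundles.append(bundle)
        done.update(bundle)
    return bundles

# Made-up stream: five independent ops, then one that depends on op 4.
stream = [set(), set(), set(), set(), set(), {4}]
print(len(pack(stream, 5)), "bundles on a 5-wide machine")  # 2
print(len(pack(stream, 4)), "bundles on a 4-wide machine")  # 3
```

The same stream schedules differently depending on the bundle width, which is roughly why code generation and optimizations for a 5-way part don't carry over cleanly to a 4-way part.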
Last edited by tajoh111; 12-17-2010 at 02:46 AM.
Core i7 920@ 4.66ghz(H2O)
6gb OCZ platinum
4870x2 + 4890 in Trifire
2*640 WD Blacks
750GB Seagate.
The Fermi arch has been around for seven months; the GTX 580 and 570 are pretty much rename material for the most part (add a couple of SPs and/or cut off some VRAM, throw a TDP limiter chip on, voila).
Yeah, right, the 4870 learned to do DX11 all of a sudden... oh wait. The 5870 was a significant redesign of the 4870. Go read Anand's (or any other decent) review before making absurd claims.
Last edited by zalbard; 12-17-2010 at 02:50 AM.
Yeah, the 5870 is not based on the 4870 to the extent that the 580 is based on the 480, I'll give you that. But the fact remains that it was still VLIW5, and mostly the same as the 4870 apart from the tessellator unit and the added SPs. The grouping of the stream processors, and the processors' capabilities (and their general dependence on the performance of the scheduler, I assume), hadn't changed.
The 5870 by itself is the fourth iteration of the same arch. Obviously it's not going to be ENTIRELY the same. But before saying "it's going to be awesome with drivers" and possibly misleading people, think of the last time AMD introduced a new arch: the X2900XT. When it was released, everyone was disappointed with the performance, and a lot of people encouraged buying the card anyway with "it's going to get better with drivers", whereas it didn't. It got better with iterations.
The X2900XT was a new arch, as was the GTX 480. Both products sucked (the 2900XT sucking more, I might add). The next versions, the HD 3870 and the GTX 580, were received ten times better by everyone, and those were the products with which they caught up to the competition. There is a similarity.
The GTX 480 got quite a significant performance boost with driver updates; hopefully AMD will follow suit. But no one can really predict the performance increases. The arch is different (which suggests it might need some optimisations), but who knows what code mess there is in the Catalyst drivers; maybe they're nearly perfect already (CFX profiles do need fixing, though).
On a side note, the difference between 2900XT and 69x0 cards is that the latter are already very decent. So one can surely say there will be no similar disaster.
Cypress was terrible in terms of perf/transistor compared to RV770: AMD increased the transistor count by 125% and managed a 60% performance increase. The 4870, on the other hand, was made on the same node as the 3870 (55nm) and increased the transistor count by 43% while increasing performance by around 55%. The only reason the 5870 seemed half decent was the process shrink and Fermi failing. You can also see this from the efficiency increase of the 6870; Cypress had way too many shaders compared to the rest of the GPU.
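To put rough numbers on that argument, here is a back-of-the-envelope calculation using only the percentage increases quoted above (illustrative arithmetic, not official figures):

```python
# Back-of-the-envelope check of the perf/transistor argument above, using
# only the percentage increases quoted in the post (not official figures).
steps = {
    "RV770 -> Cypress (5870)": {"transistors": 2.25, "performance": 1.60},
    "RV670 -> RV770 (4870)":   {"transistors": 1.43, "performance": 1.55},
}
for name, s in steps.items():
    change = s["performance"] / s["transistors"] - 1
    print(f"{name}: perf/transistor changed by {change:+.0%}")
# RV770 -> Cypress (5870): perf/transistor changed by -29%
# RV670 -> RV770 (4870): perf/transistor changed by +8%
```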
According to AMD, they've managed to get about a 15% improvement in several review games since Cypress launched. That's about as much as you can expect from a year's worth of driver updates. Of course, Cayman being a new arch, it might gain a bit more. Does anyone have numbers on how much Fermi has improved with drivers?
Last edited by Pantsu; 12-17-2010 at 03:51 AM.
"No, you'll warrant no villain's exposition from me."
Cayman won't gain much, because it is not shader limited, so optimizing shaders won't really do that much performance-wise. E.g. even if you are able to improve the parallelism by 35%, the actual shader performance gain could be around 5-10% in most cases, if not even less. And when that number is applied to the whole scene being rendered, the actual relative framerate gain would be single digit for sure. Yay for a 2% performance boost.
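That claim can be sanity-checked with a simple Amdahl's-law style calculation; the 20% shader-bound share of frame time below is an assumed number purely for illustration:

```python
# Amdahl's-law style sanity check of the argument above. The 0.20 share of
# frame time spent shader-bound is an assumption for illustration only.
def overall_speedup(shader_share, shader_speedup):
    # Total time before: 1.0; after: non-shader part unchanged, shader part faster.
    return 1.0 / ((1.0 - shader_share) + shader_share / shader_speedup)

gain = overall_speedup(shader_share=0.20, shader_speedup=1.35) - 1.0
print(f"frame rate gain: ~{gain:.1%}")   # ~5.5%, single digit as argued
```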
6970 CF review by me, here:
http://www.xtremesystems.org/forums/...d.php?t=263746
Yes, we can say the 6970 is a little disappointing, but we also need to look at what we are comparing here.
In Norway (looking at komplett.no webshop):
- the cheapest 580 is 4039 nkr
- the cheapest 6970 is 2995 nkr
So the 580 is about 35% more expensive than the 6970 (4039 / 2995 ≈ 1.35). AMD is better at some things and Nvidia is better at others. But it is not valid to call the 6970 a disappointment because it doesn't totally stomp a card that is 35% more expensive. This is in fact a BIG price difference!
jarle
Old Comp:
Antec 182
Asus Maximus Formula (Bios 1302)
Q6600 G0 @3.2GHz @ 1.3V (can easily go higher but NB/SB heat is a problem sadly...)
Ultra 120-Extreme w/Nexus 12cm realsilent fan
Corsair Dominator TWIN2X PC8500 4GB DDR2 @ stock/400MHz strap, 1:1 with cpu
XFX Radeon 6950
1 x WD Raptor X 150GB
1 x WD Caviar SE 16 750GB
Corsair HX620
Vista Ultimate 64bit
4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11(one of the 26% that isnt confused on xtreme forums)
my opinion is that in the time it will take for the 6900s to mature due to drivers, 28nm will be right around the corner offering the same perf with half the power and a 40% drop in cost.
do i want a 6950? sure do, at 150$ and 100W please...
2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
XS Build Log for: My Latest Custom Case
It would be interesting to see a Barts chip with the same number of SPs as the 6900 series; I'd bet it would perform the same. AMD/ATI has a long way to go to optimize their drivers for games, because there doesn't seem to be that much of an increase with the new VLIW-4 arch.
FX-8350(1249PGT) @ 4.7ghz 1.452v, Swiftech H220x
Asus Crosshair Formula 5 Am3+ bios v1703
G.skill Trident X (2x4gb) ~1200mhz @ 10-12-12-31-46-2T @ 1.66v
MSI 7950 TwinFrozr *1100/1500* Cat.14.9
OCZ ZX 850w psu
Lian-Li Lancool K62
Samsung 830 128g
2 x 1TB Samsung SpinpointF3, 2T Samsung
Win7 Home 64bit
My Rig
Why are you comparing it to the GTX 580?! The 6970 is aimed at the GTX 570, both in performance and in price, as reviews and prices throughout the world have shown.
The sad and ugly truth is that AMD took ~9 months to release a card to go head to head with the GTX480/GTX570.
Last edited by RSC; 12-17-2010 at 10:49 AM.
The problem is that the GTX 570 exists. It is slightly cheaper and about the same speed, or a percent or two slower. AMD brings nothing new to the table that forces NV to change its prices. Compared to earlier launches, this chip is a disappointment because AMD loses ground relative to its earlier generation: the cards consume more power than before, and NV has a part with the same speed at the same price.
Barts was a much better launch as it made NV get defensive and lower the price of their existing products.
Additionally, for business reasons the 6970 is worse too. They have a product that costs more to make (bigger die, 2x the memory, and a more expensive PCB and cooler), and they have to sell it for the same as or less than the earlier generation because the competition is a lot more competitive this time around. Considering how much of a disaster the original GF100 was, this should not have happened. But somehow AMD released a new top-end product that is only 20% faster, which is pretty bad considering they were already slower than the competition. NV did the same, but they reduced power consumption and heat, and they already had the lead in single-chip performance.
If this was a good generation for AMD they would be able to charge more for cayman but they can't.
AMD must be pretty unhappy about having to sell the 6950 for 299 when they are selling Barts for only 60 dollars less, even though Barts has half the memory (the extra memory alone should already cost about 60 dollars) and a smaller PCB, chip and cooler.
It is weird how people are ignoring the existence of the GTX 570 and only comparing the 6970's value to the GTX 580 to inflate its value. Don't get me wrong, the 6970 is good value, but there is already an Nvidia card on the market offering the same kind of value.
The GTX 580 is bad value compared to the GTX 570 too. As a value proposition the GTX 580 is bad, and pretty much every top product from AMD or NV has been a bad value proposition.
Core i7 920@ 4.66ghz(H2O)
6gb OCZ platinum
4870x2 + 4890 in Trifire
2*640 WD Blacks
750GB Seagate.
Many of you guys forget that the graphics cards coming out at the end of 2010 were not supposed to be on 40nm, as originally designed by AMD...
Then TSMC said it had problems with the smaller process, and AMD was somehow able to partially rework the current release.
My guess is that they will be making good money on both the 68xx and the 69xx.
So in a way, I guess Barts and Cayman are plan B.
---
"Generally speaking, CMOS power consumption is the result of charging and discharging gate capacitors. The charge required to fully charge the gate grows with the voltage; charge times frequency is current. Voltage times current is power. So, as you raise the voltage, the current consumption grows linearly, and the power consumption quadratically, at a fixed frequency. Once you reach the frequency limit of the chip without raising the voltage, further frequency increases are normally proportional to voltage. In other words, once you have to start raising the voltage, power consumption tends to rise with the cube of frequency."
+++
1st
CPU - 2600K(4.4ghz)/Mobo - AsusEvo/RAM - 8GB1866mhz/Cooler - VX/Gfx - Radeon 6950/PSU - EnermaxModu87+700W
+++
2nd
TRUltra-120Xtreme /// EnermaxModu82+(625w) /// abitIP35pro/// YorkfieldQ9650-->3906mhz(1.28V) /// 640AAKS & samsung F1 1T &samsung F1640gb&F1 RAID 1T /// 4gigs of RAM-->520mhz /// radeon 4850(700mhz)-->TRHR-03 GT
++++
3rd
Windsor4200(11x246-->2706mhz-->1.52v) : Zalman9500 : M2N32-SLI Deluxe : 2GB ddr2 SuperTalent-->451mhz : seagate 7200.10 320GB :7900GT(530/700) : Tagan530w
Would that mean, in your eyes, that Nvidia has taken... a long, long time to not release a card to go up against the 5970?
Or do ugly truths only go one way?
As I see it, AMD needed something against the 570; the 6970 does that. Until the 5xx series came along, AMD could happily keep the hounds at bay with the 5870 and the 5970. The 570/580 changed matters.
We're now back in the position we were in with the 48xx series of cards: Nvidia holds the top single GPU, while AMD holds the fastest card overall and the more efficient (in production terms) chips.
I'm sure the biggest issue was TSMC and its inability to drop down a node, which AMD, with its smaller dies, could have utilised sooner.
Contrary to popular belief, TSMC didn't have issues with 32nm. It was dropped for economic reasons after AMD decided to transfer Cozumel and Kauai to 40nm. There were still plans to do Ibiza at 32nm (likely where the 1920 SP rumor came from) but those fell through when TSMC no longer saw a point in pushing a manufacturing process which very few companies would pick up on.
AMD's lineup would have looked like this on 32nm:
Ibiza
Cozumel
Kauai
Instead we are getting 4 products:
Cayman
Barts
Turks
Caicos
Basically, they are now able to better cover the market with cards using a more mature process while costs are kept to a minimum. Suits me fine.
If dual-GPU solutions performed on the same level as single-GPU solutions and didn't suffer from micro stuttering and an extreme need for optimized drivers, nobody would buy a high-end single-GPU card. If the performance stability and smoothness were the same, everybody would just buy two 5770s or two GTX 460 SEs and call it a day.