It's all rumours. And the AMD diagram showing market segments gives the impression that it stays as 57x0.
ahh ok .. thanks mechanical man
Well, yeah, AMD did reshuffle their naming convention with the NI-gen cards. I guess with them now back in the high-end chip game with Cayman (on the 40 nm process node), and with the advent of Fusion-based products, the old naming convention is just no longer adequate for its purpose & AMD's interests.
That might not be in the best interest of consumers, but it's far, far less malicious than rebranding a 65 nm G92 chip with identical specs from 8800 GT to 9800 GT, from 8800 GS to 9600 GSO (early version), or classifying a G92 as the new-gen GTX 280M for the mobile segment.
Yep. I'd expect members of this respected site to at least be smart enough to use the web & check their facts before making certain claims & assertions, but my expectation seems rather too optimistic. :down:
Whose definition? YOURS? :clap:
Well, G80 and G71 were both built using TSMC's 90 nm process; clearly, going by your standard, those two are the same generation, right? :ROTF:
Or from the red corner, R300 and R200 both disagree with your notion. :ROTF:
Well, I have big hopes for the GTX 580:
- Higher TMU and ROP counts; even staying at 480 SPs is okay with me
- NO ECC
- NO Double Precision
- NO Bigger L2 Cache
Nvidia has been very, very focused on GPGPU and parallel computing for the last 3 years; it's time to get back to PC gaming. They should make a DUAL architecture now, the first for GPGPU and parallel computing and the second for gaming. I know it will make research costs higher, but NV must do this, because gaming is a very different segment and most gamers never touch CUDA or parallel computing.
1. Higher TMU count, seems like it: my prediction is 128 TMUs with fully functional 512 SPs on a revised GF100, or 96 TMUs on an enlarged GF104 with 576 SPs (fully operational or partly disabled); rough arithmetic sketch after this list
2. Yeah, you might get it
3. Unlikely to be dropped entirely; reduced, then we're on the same page (à la the 1/8 hardwired DP rate in GF104)
4. nVidia might push to exploit this advantage against the competition more; it can be optimised for gaming use AFAIK
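For illustration only, a minimal sketch of the unit arithmetic behind that TMU/SP prediction, assuming the commonly reported per-SM figures (GF100: 32 SPs and 4 TMUs per SM; GF104: 48 SPs and 8 TMUs per SM); the SM totals and the doubled GF100 TMU count are pure speculation on my part:

```python
def totals(sms: int, sps_per_sm: int, tmus_per_sm: int):
    """Total shader processors and texture units for a given SM count."""
    return sms * sps_per_sm, sms * tmus_per_sm

# Revised GF100: all 16 SMs enabled, TMUs per SM speculatively doubled 4 -> 8.
print(totals(16, 32, 8))   # -> (512, 128)

# Enlarged GF104: 12 SMs at GF104's 48 SPs / 8 TMUs per SM (speculative).
print(totals(12, 48, 8))   # -> (576, 96)
```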
Agreed. Perhaps a GF100 shrink with GPGPU enhancements and/or added units for GPGPU purposes, and a ground-up design for a high-efficiency performance GPU under 400 mm^2, both using a 28 nm process in late 2011/early 2012.
A shrink is the least you need for calling it a new generation, by any definition. I can tell you about more "fake" generations, but those are usually for fooling people like you into buying the old gen with a new name. It doesn't mean they were right to fool you into believing it was a new gen.
How would you define it? Just change the number 58xx to 68xx based on the same 40nm, with some tweaks, and call it a new generation?
First, admit that your oversimplistic definition of generational change in graphics mArch is, well, WRONG. Then we can move on to ...
Honestly, I can't really give an exact definition of generational change in GPU mArch; it's more the kind of thing you recognize when you see it, yet it stays quite debatable to define. Usually, I just look for clues such as:
1. A serious architectural change, like G71 to G80 (unified shaders), NV15 to NV20 (programmable shaders), RV770 to RV870 (proprietary tessellator to DX 11 tessellator)
2. A change in DX support within a GPU mArch, be it small (DX 10 in HD 2900 to DX 10.1 in HD 3870, DX 8.0 in GeForce 3 Ti vs DX 8.1 in GeForce 4 Ti 4xxx) or big (DX 10.1 in HD 4870 to DX 11 in HD 5870)
3. Efficiency improvement within the same mArch (Cypress to Barts, NV30 to NV35, R520 to R580, G70 to G71)
After that, it becomes a debate about semantics. Take your proposed "shrink as a clear sign of generational change": G92 @ 65 nm vs G92b @ 55 nm, GT200 @ 65 nm vs GT200b @ 55 nm; beyond added clockspeed and/or cheaper manufacturing, what else did those changes bring?
Well, if you don't have a definition, then you'd better find one before dismissing the other one.
The list you have provided suggests that almost every rebrand, refresh, and "evolution" should be considered a new generation. That's a mess; you need a clear definition, otherwise they are going to fool you into buying the same "old" 40nm GPUs (with some improvements/evolution that nobody is denying) as a new generation.
A "rebrand" should never be considered a new architectural design. Nvidia's G92 architecture was a complete joke. I mean, it was good, but I can't believe they made it last over 3... yes. count it... 3 generations (8800 GT, 9800 series, and GTS 250) Each time, they renamed and re-launched it. What a joke.
In this case, AMD is using the same microarchitecture for the 6800 series, but it has been significantly reworked. The chips are smaller, power consumption is down, and performance is up compared to the 57xx series. It's a new chip, not just a rebranded Cypress or Juniper. So it's obvious that AMD is tweaking its design from year to year, even if it's not a totally new generation. Nvidia is famous for doing the same thing, only Nvidia rebrands entire cards, not just the architecture. That's a pretty significant difference.
OK, I dismiss your notion of generational change because it's oversimplistic & easily rebutted. Do you honestly think that G71 to G80 wasn't a generational change when they were both made on the same process node?
Then, I agree, defining a generational change is rather a mess; perhaps if we divide it into a few categories by degree of change, it gets somewhat easier to understand (rough sketch in code after the list):
1. Clear generational change, G71 to G80, NV15 to NV20, R200 to R300.
2. Efficiency improvement or DX support upgrade, R520 to R580, Cypress to Barts, NV30 to NV35, R600 to RV670
3. Dumb shrink, G92 to G92b, GT200 to GT200b, R420 to R430
4. The lowest of the low, simple rebranding à la 8800 GT to 9800 GT, or 8800 GS to 9600 GSO.
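To make the tiers concrete, here's a minimal sketch of that taxonomy as code; the tier names and the example mapping are mine, purely illustrative:

```python
from enum import Enum

class ChangeTier(Enum):
    """Degrees of change between GPU chips, biggest to smallest."""
    NEW_GENERATION = 1    # clear generational change
    EFFICIENCY_OR_DX = 2  # efficiency improvement or DX support upgrade
    DUMB_SHRINK = 3       # same design on a smaller process node
    REBRAND = 4           # same chip, new name

# The examples from the list above, mapped to their tiers.
EXAMPLES = {
    ("G71", "G80"): ChangeTier.NEW_GENERATION,
    ("R520", "R580"): ChangeTier.EFFICIENCY_OR_DX,
    ("G92", "G92b"): ChangeTier.DUMB_SHRINK,
    ("8800 GT", "9800 GT"): ChangeTier.REBRAND,
}
```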
Back to topic: I think there are a few proposals floating around in the GTX 580 speculation, but I also think it can most probably be regarded as an efficiency improvement, though some pessimists suspect the card will only use a respun GF100 chip (better yield, fully fledged, higher clocking, with better thermal & power consumption characteristics).
You are right about Nvidia's G92-mess, but AMD needs 22nm to call these cards a new generation too.
I'm sure both AMD and nVidia are improving the PPP (performance, price, power usage) in this round, but that doesn't mean a new generation by itself.
We have to be aware of what "new generation" means, otherwise these guys (both of them) will try to repeat the same G92-mess in this round.
No company needs a new node to call it an ARCHITECTURAL change. It isn't a NODE change; you can have a NEW node and the SAME architecture, and the SAME node and a NEW architecture.
Quote:
You are right about Nvidia's G92-mess, but AMD needs 22nm to call these cards a new generation too.
That's absurd, and wrong on so many levels.
The move from Pentium 4 Cedar Mill to Core 2 Duo isn't an architectural change to you?
Or from Athlon to Phenom?
PS. It's gonna be 28nm, not 22nm.
We need a good definition of "new generation". The new number game (68xx, 69xx, GTX 560, 580, etc.) doesn't make any sense for cards based on the same "old" 40nm node. It's creating a lot of confusion already, right at the start. Just look at the 68xx mess: people are really confused about how and why it underperforms compared to the 58xx, and then you get some idea why we need a shrink to change the generation number (the first digit).
"Architecture change" is a tricky term; what do you mean by that? Please elaborate: would it cover any improvement in performance/power usage, or where should the line go for changing the first digit of the GPU number?
You didn't reply to me directly, BUT, taking from this:
Quote:
I've said already, a shrink is the least you need to call it a new generation (and change the first digit in the number game).
The move from a Pentium 4 Cedar Mill x86 CPU, based on a 65nm process, to a Core 2 Duo x86 CPU, based on the same 65nm process, in your EXPERT opinion doesn't qualify as an architectural change.
Mind-blowing, man.
You are talking about CPUs; that's a much more complex arena, another world. Did you miss that we are talking about GPUs?
Tell me where the line goes for architectural changes to call a GPU new gen. Would better performance and power usage justify it? How about all those rebrands and refreshes, would you call them a new gen (and should they get a new first digit)? Where does the architectural generation line go for a GPU?
Who told you I'm an expert? I'm just trying to express my ideas.
Well man, a chip is a chip; they're both very similar, just engineered for different usage models. Nowadays especially, when a GPU is fully programmable.
Quote:
You are talking about CPUs; that's a much more complex arena, another world. Did you miss that we are talking about GPUs?
Sometimes you just have to understand and admit when you're wrong.
The architecture of a chip is its inner structure and workings: SIMD/INT/FP units, caches, buses, memory controllers; it's in essence a blueprint for the chip.
A node shrink is just that: you make the transistors smaller (an oversimplification, of course).
For example, the move from the 48xx-based GPUs to the 4770 was a node shrink plus cutting out a portion of the shader blocks; no change in the way it works, just smaller transistors and trimming.
An architectural change is when you change and add features/components in a noticeable way. For example, in the Athlon to Phenom move you add L3, greatly change the memory controller, and improve on features; in the end you have a product similar to its predecessor on a basic level but DIFFERENT architecturally.
Another example is the 5xxx cards to the 68xx cards: same basic architecture, yet many things have been reworked, optimized, and changed (for example the rasterization units and the tessellator); no shrink, sizeable difference.
Cayman, most probably still on the same node, will have a GREATLY changed architecture.
Just try to understand that architecture doesn't have to have anything to do with a node shrink. You could build Nehalem or Fermi even on a 130nm process; it would be MASSIVE, but it would still be the same architecture. A little sketch of the idea below.
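If it helps, here's a minimal sketch of that separation, treating the blueprint and the process node as independent fields; all the names here are mine and purely illustrative:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Chip:
    """A chip is a blueprint (architecture) plus a manufacturing node."""
    architecture: str   # units, caches, buses, memory controllers, ...
    node_nm: int        # process node: transistor size, cost, clocks

fermi_40 = Chip(architecture="Fermi", node_nm=40)

# A node shrink changes ONLY the node; the blueprint is untouched.
fermi_shrunk = replace(fermi_40, node_nm=28)

# An architectural change reworks the blueprint; the node can stay the same.
reworked_40 = replace(fermi_40, architecture="Fermi-reworked")

assert fermi_shrunk.architecture == fermi_40.architecture
assert reworked_40.node_nm == fermi_40.node_nm
```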
Anything new that is improved or refined is a generation. The 6000 series is ATi's (AMD's) 2nd generation of DX11 cards, offering moAr than the last generation.
Secondly, if it has architectural changes, enhancements, refinements & new abilities, then it's a new architecture by virtue of reality.
Your pseudo-semantical argument is just rhetoric for your trolling, and embarrassing for you.
Your ideas rest on a definition you made up yourself. Worse still, the definition is wrong.
Wiki has a nice definition we can use as a reference:
link
Quote:
A generation can refer to stages of successive improvement in the development of a technology...
As many have shown, there are improvements between any number of successive generations despite staying on the same node. I suggest you reread a few of the previous posts and learn something; I know I have. :up:
Arrgh... Sam Oslo sounds like a broken record. Is there any way to put him on an ignore list or something? He isn't contributing any useful information to any of these GFX threads. Endless debate about a new node being a new architecture, yada yada yada...
Can you PLEASE educate yourself by browsing the web & looking for info on nVidia's G71 and G80, both built using THE SAME TSMC 90 nm process, or ATi's R200 and R300, both built using THE SAME TSMC 150 nm process??? :shakes:
If you don't know jack about GPU mArch and won't learn even one bit, please stay away from discussions of more sophisticated topics like GPU generational change; your stubbornness & ignorance is just so annoying. :down:
Echoing your frustrated scream, from a tropical archipelagic nation down under: aarrrggghh... :ROTF: