I have a feeling Trinity will be a great performer,
but the TDP, I hear, will be increased to 125W, at least on the desktop side of things.
125W is fine by me. I personally won't even use Turbo, as I'll clock it manually and shut Turbo off.
The difference in operating cost isn't worth discussing for me on the desktop...the cost to run an OC'd BD may be another thing altogether, as it is a really large jump in power.
I'm hopeful Trinity will be a good chip.....she looked bangin' in those leather pants!
Seriously, I started looking at Inhell offerings, as they can be had rather cheaply, and they perform....come on AMD, please don't make me do it!
If you take a very close look at the FX Next slide that was shown some time ago (at the FX launch), you can see it mentions 8MB of L2 cache (4 modules x 2MB per module). No mention of L3 cache at all :). So they seem to have canned the L3 in Vishera and still get the good core improvement without it. That's pretty good IMO. The server variant will definitely have L3 and feature 10C/20C models, so it can only be faster than Vishera. Vishera, on the other hand, if the no-L3-onboard tidbit is true, may end up considerably smaller than Orochi. In Orochi, those 8MBs of L3 not only consume a lot of power, they eat up a lot of die area too. So if they canned it, they may gain some on the frequency side (due to the TDP headroom they would get) and they can reduce the cost of the chip (smaller die).
Without L3 we get lots of modules, even if it doesn't make sense :D
This shouldn't be needed.
During the briefing, AMD mentioned that the L3 cache was overkill for desktop and very rarely showed any real benefit. The L3 was/is there for the server SKUs and should be seen as a bonus on desktop. So don't be disappointed when Trinity arrives without L3 ;)
I think it will be good riddance on desktop :). Give us some decent-latency L2 and some TDP headroom from cutting the L3 off :D. Trinity was said to have a bit higher IPC and more clock, good job in my opinion.
Will there be a new stepping in this generation of Bulldozers, i.e. an FX-8170?
Maybe, yes... In the documentation there is info about a B3 revision, but nothing more.
Charlie posted a die shot for Trinity
http://semiaccurate.com/2012/01/05/e...about-trinity/
Looks like the GPU portion got even bigger, and the CPU even smaller. Even though AMD is behind Intel with their mobile CPU line, their GPUs just crush them, and this will do an even better job of it.
What's the chance that it's GCN?
Manicdan, about this much: <1% :D
edit: or rather 0%; it's pretty similar to Llano's IGP, at least that's what I think.
booooo!
One more interesting part is the die size, which Charlie lists as ~240mm^2. Roughly half of this is the GPU portion. If FX Next is based on PD with no L3 (as the leaked slide suggests), then we can expect a similar die size for the 4CU/8T part (240-250mm^2). If clock and IPC actually do go up by 15-20% (combined) and die area goes down by ~30%, this would considerably improve the performance/mm^2 ratio versus today's FX (1.15*1.3 = ~1.5x, or 50% better perf./mm^2).
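As a quick sanity check of that estimate, here's a minimal sketch of the arithmetic, assuming the post's own numbers (15-20% combined clock+IPC uplift, ~30% smaller die) are in the right ballpark:

```python
# Quick check of the perf/mm^2 estimate, using the post's own assumptions:
#   - 15-20% combined clock+IPC uplift for FX Next over the FX-8150
#   - die area roughly 30% smaller from dropping the 8MB L3
def perf_per_mm2_gain(perf_uplift, area_shrink):
    """Relative perf/mm^2 vs today's FX (1.0 = unchanged)."""
    return (1.0 + perf_uplift) / (1.0 - area_shrink)

for uplift in (0.15, 0.20):
    gain = perf_per_mm2_gain(uplift, 0.30)
    print(f"+{uplift:.0%} perf, -30% area -> {gain:.2f}x perf/mm^2")

# Prints ~1.64x and ~1.71x; the ~1.5x above multiplies 1.15 * 1.3 instead
# of dividing by the smaller area, so it's the more conservative reading
# of the same assumptions.
```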
Manicdan, that's not necessarily bad. GCN is a really nice architecture, even more so for general computing, but I think a GCN IGP would end up bigger than this one. Charlie said the size is ~240mm^2, so just a bit bigger than Llano (~228mm^2), yet the IGP is at least 30% better, plus you get VCE and Eyefinity; maybe UVD3 is also better, who knows. Even the CPU should be faster, but it depends on clocks.
Someone mentioned the shader count is only 384. That seems a bit low to me; maybe the clock is higher than expected.
http://semiaccurate.com/forums/showp...57&postcount=2
undone, they said ~30% above or a bit more, so 384 SPs, i.e. 6 SIMDs of VLIW4, is +25%, and then the rest will be clocks. If they can clock it at 660-720MHz and still stay in a comparable TDP, that's another 10-20%.
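For what it's worth, here's a rough sketch of how those numbers stack up, assuming Llano's desktop IGP at ~600MHz as the baseline and taking the +25% shader-array figure above at face value:

```python
# Rough stack-up of the IGP estimate, under two assumptions from the thread:
#   - baseline: Llano's desktop IGP at ~600MHz
#   - the 6-SIMD VLIW4 array (384 SPs) is worth roughly +25% by itself
llano_clock = 600   # MHz, assumed baseline
width_gain = 1.25   # post's estimate for the wider shader array

for trinity_clock in (660, 720):
    total = width_gain * (trinity_clock / llano_clock)
    print(f"{trinity_clock}MHz -> ~{total - 1:.0%} over Llano's IGP")

# 660MHz -> ~38%, 720MHz -> ~50%, i.e. comfortably past the ~30% target
# if both assumptions hold.
```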
Well, there are a few things to think about that I realized after my post.
If it's based on GCN, then how would they do Crossfire, since the lower-end 7000 GPUs are based on the previous arch? Which explains why the lower-end 7000s are the previous arch, so it's all compatible. And as informal said, the size of the 2 PD cores is really small, so hopefully this means we shall soon see 10-core non-L3 chips with great turbo clocks and maybe some extra single-threaded perf.
Yeah, it could be, but I meant this slide:
http://img651.imageshack.us/img651/6...tachmentdz.jpg
Only L2 is listed in the slide. Maybe they didn't mention L3 on purpose, who knows. Anyway, 15-20% over the 8150 is good enough to put AMD back into the mid-high range.
http://img707.imageshack.us/img707/9711/63332322.jpg
edit: note that this slide says 10% better x86 performance. Other slides state Piledriver will have 10-15% better core performance than Bulldozer. So it's not really clear if the FX Next slide is referring just to IPC or to clock+IPC (which would be a very low improvement). Another possible explanation is that AMD knows they will launch the 8170 at a higher clock (around a 3.8-3.9GHz base clock), so they figured this into the PD performance estimate: PD @ 4GHz should be ~10% faster than an 8170 @ 3.8GHz.
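One way to make that last reading concrete (a rough sketch; the 8170 clock and the ~5% IPC figure are speculation from this thread, not confirmed specs):

```python
# One way to read the "10% better x86" figure, using the hypothetical
# clocks from the edit above (none of these are confirmed by AMD):
#   - FX-8170 at ~3.8GHz base
#   - the Piledriver FX at ~4.0GHz base
#   - ~5% IPC gain for Piledriver over Bulldozer
ipc_gain = 1.05
clock_gain = 4.0 / 3.8   # ~+5.3%

combined = ipc_gain * clock_gain
print(f"clock +{clock_gain - 1:.1%}, IPC +{ipc_gain - 1:.0%}, "
      f"combined +{combined - 1:.1%}")

# Combined comes out to ~+10.5%, which matches the slide's 10% only if it
# means clock+IPC over an 8170-class part rather than IPC alone.
```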
That slide really says 10% better x86 DIGITAL MEDIA WORKLOAD. Not 10% better x86 GENERAL WORKLOAD. I do not trust or like seeing caveats like that.
AMD disclosed the IPC improvement to THG a while back. They stated that a third of the 15% improvement PD will bring over BD will be IPC (so 5%), while two thirds will be power improvement (read: more clock within the same TDP). So the 10% in the slide must be clock+IPC over a Bulldozer "8C" at an undisclosed clock (maybe the 8150 or the 8170, where the latter is the more logical choice as PD will come right after this part).
We may still see an additional boost of 3-7% in less multithreaded apps (thread count lower than 8), as per Microsoft. The patch will come with the 8170 or a bit sooner.
It looks a little smaller, with marginally lower performance (hopefully PD is much improved!).
If those two pictures ARE really to scale, look at how far down the die the Llano CPU part goes vs PD.
Oh, he used the I/O pads on the left (which are not the same) for reference...
Do you really think performance in gaming scales linearly to expected performance from an early marketing slide, even if it was real?
There is a 9% clock speed difference with Turbo on, and a 6.5% base clock difference, between the i5 2400 and 2500. The real recorded difference is 4%...
Think 20-25% for that gain you are looking for.
On top of this, BD loses to the i5 2300 by 3%, which is clocked at 2.8/3.1 vs the FX-8150's 3.6/3.9/4.2.
The FX-8150's base clock is 28.5% higher than the i5 2300's, and its mid turbo is 26% higher than the i5's turbo. Make that 35% for max BD turbo vs i5 turbo. The i5 will also have intermediary Turbo state(s); however, those are not published by Intel, and the 3.1 only applies with 1 thread in use.
That is, of course, with about twice the real-world power consumption compared to the lower-end Intel quad, the i5 2300.
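For reference, here's how those percentages fall out of the published base/turbo clocks (a quick check; as noted above, the i5 2300's intermediate turbo bins aren't published, so only the single-thread 3.1GHz figure is used):

```python
# Quick check of the clock deltas above, from the published base/turbo
# clocks (the i5 2300's intermediate turbo bins aren't public, so only
# the single-thread 3.1GHz figure is used for the i5 side).
def delta(a_ghz, b_ghz):
    """Clock advantage of a over b, as a percentage string."""
    return f"+{a_ghz / b_ghz - 1:.1%}"

print("i5 2500 vs 2400, turbo:", delta(3.7, 3.4))         # ~+8.8%
print("i5 2500 vs 2400, base: ", delta(3.3, 3.1))         # ~+6.5%
print("FX-8150 base vs i5 2300 base:", delta(3.6, 2.8))   # ~+28.6%
print("FX-8150 mid turbo vs i5 turbo:", delta(3.9, 3.1))  # ~+25.8%
print("FX-8150 max turbo vs i5 turbo:", delta(4.2, 3.1))  # ~+35.5%
```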
Gaming is only one part of the workloads. I think for most games, users will NOT play at low res. And @ 16x10 the difference, even with a 7970, will not be that big between i7/i5/BD. In some places you may get a bit lower fps, but with all the CPUs OCed to their maximum, I think at the resolutions 7970/78xx users will actually play most games at, the difference in gaming experience would be negligible or not that noticeable.
In real workloads that are not gaming, the difference will indeed track the clock and IPC uplift. So the 10-15% AMD stated will be true in some workloads and less true in others, depending on what part of the CPU the application is bound by.