Why confused? AMD probably no longer uses the ATI logo on those stickers, but that doesn't mean the ATI logo will suddenly disappear everywhere.
It's not like they didn't see it coming themselves. Either they are still using them (internally or not), the pic is from before they stopped using the logo internally, it is fake, or someone took the sticker from an older card and replaced it, though why, I don't know.
Could be an old sample too.
Not as old as AMD's plans to stop using the ATi brand, so it's unlikely. Just because the news about the brand change only reached us a few weeks ago doesn't mean AMD's plans for it are only a few weeks old.
Either it's a fake, or the change only applies to external stickers on laptops and doesn't apply to stickers on actual cards this generation, or AMD is still using ATi internally.
Most likely just an internal testing sample using an old fan with old sticker on it.
Is it a pic of SI or NI? Nordichardware and sweclockers say the picture is of NI. I thought NI would come after SI...
I don't know how many samples we'll have to see before the final one...as usual
Well, sweclockers also say that Orochi will have integrated graphics. :D
They also claim that AMD said cost was the main reason why Bulldozer won't support AM3. :shakes: A while back they were the first, and only, site to report that Phenom would consist of four separate cores instead of a true quad-core design. :rofl: The list goes on.
And it's of no use to point out flaws in their news, they just ignore it, no matter which channel you use to forward your report. But, if you happen to use the word "disinformation" to describe their practice you get permabanned. ;)
EDIT: I'm not saying I know for sure whether AMD calls these cards NI or SI, just that you shouldn't take sweclockers' word for anything. ;)
Yea, I wouldn't put much faith in sweclockers; they provide useful reviews every now and then though, like the extensive autumn graphics card round-up.
Caicos GPU-Z Screenshot
http://i51.tinypic.com/2zzi0xy.jpg
Quote:
258,CAICOS PRO (6779),NI CAICOS
AIO:
AMD HD 6870 (CaymanXT) and HD 6300 (Caicos): Pictured!
http://gpudesign.bafree.net/amd-hd-6...cards-pictured
Red PCB on the HD6870? Maybe an early sample, or they are going back to red :)
So the die is approx. 68.5 mm² (9.5*7.2). Any estimates on transistor count? If the transistor density is anything like in Cypress, the transistor count would be about 440.9M ((2150 M/334 mm²)*68.5); however, compared to the similarly sized Cedar (59 mm²), the transistor count would be approx. 339 M ((292 M/59 mm²)*68.5).
If the only change from Cedar is UVD 3.0 + 3D Blu-ray playback (which are more or less just additional logic), then a transistor count of ~340 M seems most accurate.
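The density scaling above is just a ratio; here it is written out as a small sketch, using only the figures already quoted in the post (Cypress 2150M/334 mm², Cedar 292M/59 mm², Caicos ~68.5 mm²):

```python
# Rough transistor-count estimates for a new die by assuming it has the
# same transistor density as a known reference die.
def scale_transistors(ref_transistors_m, ref_area_mm2, target_area_mm2):
    """Scale a reference transistor count (millions) to a target die area."""
    return ref_transistors_m / ref_area_mm2 * target_area_mm2

caicos_area = 68.5  # mm^2, approx. 9.5 mm x 7.2 mm

# Scaled from Cypress (2150M transistors, 334 mm^2): ~441M
print(scale_transistors(2150, 334, caicos_area))

# Scaled from Cedar (292M transistors, 59 mm^2): ~339M
print(scale_transistors(292, 59, caicos_area))
```

The Cedar-based figure is the more plausible one, as the post argues, since small dies spend proportionally more area on low-density I/O and uncore than a big die like Cypress does.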
I hope the 6850 still uses 2×6-pin; otherwise CF will need a new PSU.
When is DX12 supposed to come into play?
Latest article by C. Demerjian, consume with salt of course:
http://www.semiaccurate.com/2010/09/...thern-islands/
Quote:
A look at what is coming in October
by Charlie Demerjian
September 6, 2010
WHAT IS THE latest on DAAMIT's, soon to be just MAD again, Radeon 6000 series? The story is long and complex, with some things coming into view with a bit more clarity.
Let's start out by saying we got something very wrong at SemiAccurate: the Southern Islands name. No, SI is real, it is the 28nm version of the chip that we have been calling Northern Islands for a few months now. The dates, functions and all the rest we had correct, we just reversed the family names.
Why? Let's blame TSMC for that. From what we are told, TSMC canceled their 32nm node less than a month before AMD taped out Northern Islands, then a 32nm family. Some people at DAAMIT were MAD as heck, in a polite Canadian manner of course, but what can you do? Word has it that Nvidia was similarly affected, but they were nowhere near as close to tapeout.
That is where the confusion started. The 32nm NI parts were called Cozumel, Kauai and Ibiza, and those were unceremoniously flushed. The next stopgap parts came about at the same time we heard the name Southern Islands, and that was the week our normal crystal ball was at the shop. The backup, with a government-supplied HD decoder box, confused the new code name with the stopgap family.
The stopgap family had no code name at first, and then was called NI-40, then just NI once again. This happened about the same time 28nm info started floating in, and as bad luck would have it, the decoder started acting up. We got the family names wrong, but everything else right.
Internally, new code names, Cayman, Barts et al were put into place, and then eventually those re-became the NI line. Still with me? If not, it is OK, I was lost too. In any case, that is where we screwed up, and why.
Back to the NI family, what are they? Well, that part is easy enough, they are a serious re-do of the Evergreen family. The biggest change is in the shaders, they have gone from a 4 simple + 1 complex arrangement to a 4 medium complexity arrangement. This should end up no slower than the old way for simple calculations, the overwhelming majority of the workload, but also be faster for most of the complex operations.
The reason for this can be summed up by saying that the new 'medium' shaders can't do what a complex one can in the same time, but there are more of them, and they can more than make it up in number. Since a GPU is a throughput machine, not a latency bound device, you won't see the difference, it will just work a lot faster in several kinds of operations.
There will probably be a pathological case or two that will be a bit slower, so look for the attack slide decks to float as soon as samples leak. Remember the Nvidia slides from CES about how Fermi was many times faster than HD5870 on a specific section of the Heaven benchmark? Remember how well that turned out in practice, and in sales? Wait for real benchmarks, and don't worry about the desperate sputters from the big green FUD cannon.
Since the shader count is 80% of the old grouping, there is some space saved, and on top of that AMD has had a lot of time to optimize area. On the down side, each shader is marginally bigger, but the end result is a cluster of four new shaders that is smaller than the old 4+1 group, and faster too.
The uncore, or at least the un-shader parts, are beefed up as well, with some very notable efficiency gains. The net result should be vastly better utilized shaders, so performance should go up incrementally there too. Throw in a few more shader groups, and you have a notable speed increase for the NI family.
The down side is size. More shaders, smaller though they may be, and a beefier front end add up to a larger die. If this part were on 32nm, it would be smaller and probably would have had more shaders as well, but the backport had a price. The net result is that the die of NI will grow by about 10-15%; let's call it around 380-400mm^2. Performance, on the other hand, should grow disproportionately, with the few weak spots of the Evergreen architecture smoothed over. That is what the game is about, isn't it?
...
more on the link
Awesome...
This was pretty much what I was hoping for from AMD. Not on the nvidia side though; that's just going to mean higher prices and less real choice. I don't really see anything in nvidia's strategy that allows for a fightback until 28nm.
Both sides were expecting to have a chance at 32nm; TSMC and GF dumped it, so both sides are stuck on 40nm. ATI/AMD still have more room to wriggle with their architecture.
Anyone else thinking nvidia is throwing more effort into their 28nm for a comeback?
cant wait for benchy to flow through :D
i think they will have a decent 40nm gpu sometime H1 2011. i think they might announce something at GTC which is in a few weeks. this would make sense because it complies with the rumors that nvidia is ordering less 40nm lots.
i think all of the 32nm node rumors are charlie's BS. all of that is made up so he can create elaborate stories to explain why ATi doesnt have a refresh out yet. tsmc did the same thing with 45nm->40nm and publicly announced they would do the same thing with 22nm->20nm. see a pattern? why would 32nm->28nm be any different? it seems tsmc goes straight for the half node, possibly because that is part of their technique for designing processes or perhaps their nodes mature fast enough that they might as well delay it a few months and add an optical shrink. furthermore, i dont see any technical reason that 32nm would be cancelled, especially if they can make a 28nm node.
Thinking back on how both of them have worked since the DX8 days you're probably right about 32nm, I guess it was just a dream for us that didn't jump HD5870 on launch day.
nvidia do tend to have a high end refresh between processes, but this time around I'm not sure how far they can go.
Well, if all that our Mr. Salt says is true, I hope AMD has some surprises for SI as well and won't just deliver a "simple" shrink. But seeing what they did lately, I'm a tiny tad optimistic.
I don't know. Maybe Big Green should have two teams working on different chips: one chip for HPC/GPGPU and one for gaming only. Thing is, we only see the gaming side of things.
28 nm might be the node where the architecture shows its muscles :) RV770 showed R600's architecture wasn't so bad after all (imho RV670 already gave a little hint of that); it's just that they wanted too much too early (big chip etc). Kind of like what NVIDIA did with GF100.
LOL. C. Demerjian again :D:ROTF::D
Quote:
Originally Posted by informal View Post
Latest article by C. Demerjian, consume with salt of course:
well i see it as two options:
1. full throttle on 28nm "fermi2"[huge rewards, but risky, and almost exactly what they did on 40nm, also 28nm could be a year away!]
2. silicon spin on 40nm with solid enhancements[decent perf & manufacturing cost gains, lots of manual effort but a mature process is always a safe play]
nvidia has a lot more information to base their decisions off of than we do. i'm going for #2.
or kind of like your excellent sarcasm.:rolleyes: aside from that you misinterpreted my post. by 32nm node rumors i meant all of the godly things ATi would have done on 32nm if it werent for tsmc's incompetence. it's really poor journalism. ATi, the underdog, is going against the system which is holding them down. (i.e. nvidia and tsmc.)
tsmc doesnt have yield problems. it took them a year to ramp but after that the process was fine. the problem is they undershot capacity by a large margin which isnt as unusual as one might think. iirc in some thread i calculated defect density to be 0.115 defects/cm². that's pretty good.
also fermi's yields do not mean anything at all for other chips. they could have poorly designed the chip. when you have billions of transistors things get complicated. yields are no exception.
Reading other rumours I don't think the 5770 will be renamed to the 6770, I think the 6770 will have the same architectural changes the 6870 will enjoy on a smaller scale and will become the 6950 in x2 form, while the 6970x2 will be the full blown 6870 dual we'd be expecting from AMD.
I do think the 5770 might be tweaked and re-branded but as a much lower model.
Going back to this:
These would have to be price brackets, and the only way it would be worth making the 6950 sell for less than the 5970 would be if they used smaller chips to start with. A cut-down chip wouldn't draw enough less power for it to be worth the effort.
It's also possible, with the switch from 5-wide to 4-wide shaders, to make a smaller part with a similar number of shaders and higher clocks that would give impressive performance on the right benchmarks. If you only picked benchmarks heavily biased towards complex shaders you'd get the results we've seen posted, while on benchmarks biased towards simple shaders there would be next to no change at all.
The point that is making me think this way is the rumour that AMD is launching the 6770 first and not the 6870; not even nvidia would launch a new generation of GPUs with a re-branded unit. I guess we'll find out if AMD starts dumping the price on the 5770, or leaves the price high in order to protect stock of the silicon they plan to re-brand.
I hope they don't rebrand. I mean, 6990 = dual GPU and 6970 = single GPU does not compute; that is, if the 6990 is made of two Cayman chips. But if it's made up of two Barts, then I think it's ok.
"first to market are Barts Pro and XT in October"
Quote:
Originally Posted by Muropaketti.com
According to sweclockers that's correct. http://translate.google.com/translat...el-med-cypress
I wouldn't put too much faith in their source, as they keep calling it Northern Islands, but they expect Barts to be released first: 256-bit, and going to easily beat the GTX 460 at a price of 2000~2500 SEK, which should be around $230~250 (tax differences and price variances taken into account).
I'll just float this idea, I'm thinking if those benchmarks were real, they can only be from the Barts XT, it's the only chip that could match the GPUID profile we're seeing. If you look at the difference between the 3870 and the 4870 it would be possible for ATI to do this again. Especially if they planned to take their time with 6870.
Basically ATI isn't releasing a 'mid range' GPU, they're releasing a new top end and giving it a mid range name. It'll be priced as high midrange but it'll beat the 480; it'll be a huge psychological hit. Especially as nvidia's midrange chip is bigger than ATI's current high end chip.
Sure it'll decimate their top end prices but they can just release the aforementioned 6950. They'll keep the 6870 name back for either a push into a big core in 2011 or 28nm depending on how nvidia react.
If it works as planned, ATI would have the DX11 market sewn up by the time nvidia can release on 28nm and still make a profit as they've already proved they don't have to drop prices in comparison to nvidia.
This is all theory, but the rumours I've read up until this point support it.
http://i52.tinypic.com/2qurntt.jpg
I'm not sure if it's real,I can't find the source of this pic.
Most likely Barts XT will be close to 5850 and offer better price/performance than GTX 460. I'd bet in that case it'll also replace 5850 quickly since they'd be too close and people would be more inclined to buy a 6770. I wouldn't be surprised if 5850 price dropped down soon.
I still find it hard to believe that whipping up a frankenstein is easier than modifying an existing generation or un-shrinking a future one.
Whatever the case is, it sounds like AMD did the exact opposite of "not have a backup plan".
If they are going to rename some old cards or change the naming scheme, I can't really support that. It makes sense the way it is now. Don't try and confuse customers AMD.
seems like a small PCB for a 6+8pin
if its really consuming 250+ watts, id expect the cooler would have to be much bigger, or they are giving you room to OC
the size of the heatsink says a lot more than the number of power pins
wasnt cayman pro slightly bigger than cypress XT?
and its 6+6 pin...
so im off by less than 10%
does that mean this card should be 20-30% shorter than a 5870 while consuming a lot more power? seems like it's gonna be super loud to me
the 5850 has great noise to perf ratio, which is what this is looking like for size, no way they can do the same thing while adding in 75+ watts
The PCI Express spec says that each 6-pin power connector can deliver up to 75 W (as does the PCI-Express 16-lane slot; 25 W for 1, 4, 8 lane slots); past that threshold, overcurrent protection should kick in to prevent possible damage due to a fault (in case the product's resistance drops too low, demanding more current).
However, I believe some manufacturers do not play by the standards and their cards actually draw more than the slot + connectors should be able to feed the card with.
Edit: Strange that I haven't ever heard of a situation where the overcurrent protection would actually kick in with highly overclocked & overvolted cards like 5970 etc.
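For illustration, the spec-allowed budgets per connector combination add up like this (75 W from an x16 slot, 75 W per 6-pin, 150 W per 8-pin; the helper function is just a hypothetical sketch of that sum):

```python
# Spec-allowed board power for a card in a PCIe x16 slot, by connector mix.
SLOT_X16_W = 75   # watts from the x16 slot itself
SIX_PIN_W = 75    # watts per 6-pin auxiliary connector
EIGHT_PIN_W = 150 # watts per 8-pin auxiliary connector

def board_power_limit(six_pin=0, eight_pin=0):
    """Total spec-allowed draw in watts for the given connector count."""
    return SLOT_X16_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_limit(six_pin=2))               # 6+6 pin: 225 W
print(board_power_limit(six_pin=1, eight_pin=1))  # 6+8 pin: 300 W
```

So a 6+8 pin card is allowed up to 300 W by spec, which is why that connector layout on a short PCB raised eyebrows in this thread; it doesn't mean the card actually draws that much.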
The 5870's cooling is not very efficient. A bit beautiful, but not efficient. It draws ~200 W, so 230 W is not much more; even with a shorter card I don't see any problem.
Maybe it's shorter because the power stage is much more efficient, and the PCB is 10 layers, so there's no need for more PCB.
If this GPU kicks ass, with good $/W and $/fps ratios, why would I cry about a short PCB? ...
the size of the PCB can tell you how big the heatsink is. thermal efficiency is not increasing much for gpus of a given size, since they are already using heat pipes and copper fins. if this gpu is just like a 5850 (150W draw in games, and the same size) we can be pretty sure it's going to be about the same noise. then look at the 5870: a little bigger, draws 221W peak (still has only 6+6 pin), and is much louder.
so if this new card is smaller than a 5870, but draws more than a 5870, then it will be loud as hell. and when i refer to the PCB, i dont care about layers, i care about length.
Who says that it's necessary? It blows on the VRM-area. Maybe someone put their finger on it, noticed it was boiling hot and put a fan over it.
There is much more to thermal efficiency than heatpipes and copper fins. Radial fan blade orientation and bending alone have a big impact on noise levels and the maximum pressure generated by the fan; fin density and thickness, exhaust holes in the backplate, a vapour chamber base in the heatsink etc. all play a role, though one could argue a small one. But from what I've understood, AMD has been using the cheapest possible cooling solution, with a fan design dating back to the HD2900 series, now several years old and far from the most efficient blade design. It's not like they're at the limits in this regard; they can improve the noise levels and cooling efficiency if they want to, but it isn't as cheap then.
basically im just saying that i BELIEVE, the 6+8pin is for OCing, not typical power draw. ive listed my reasons and i dont really care if people try to nit pick the little things when the overall estimates show a very large difference between generations
hmm if the 6870 is 6+8 pin power what will a 6970 have
unless they say the h*** with pcie specs
We're still not sure if that was a 6870; it could be a 6770. Like I said earlier, the very first rumours were of a 6770 that was larger than the 5870 on 40nm, then a 6870 with a lot more shaders coming a few months after. If the 6870 were coming a few days later, then I'd say that picture was the higher end card without doubt, but right now I'm not sure what that card was.
Seems to be another part of the Chiphell post.
http://www.hardocp.com/images/news/1...vPQj_1_1_l.jpg
Dual DVI, HDMI and dual mini-DP
if memory speed is 1250, then that hints towards a 384-bit memory interface.
I think it has more to do with really large variations in current as opposed to exceeding the limit and passing a threshold. Individual board manufacturers set their own limits from what it would seem.
I remember an old Epox board a friend had would shut down when a heavy load was applied to an 825mhz X1800. (power supply was pcp&c, so not the cause)
I don't think that pic is a photoshop product, but I also don't think it's supposed to be a retail card edition either. A sample card for driver development & AIB partners' R&D? Cayman is supposedly taking quite a departure from the previous ATi µarch, both in shader arrangement (4D) and uncore (stronger tessellation performance), so the card might be physically ready much sooner than the planned launch date, while the driver needs to be developed further considering those significant changes.
this card is real :)
Quote:
Originally Posted by mindfury View Post
http://i52.tinypic.com/2qurntt.jpg
I'm not sure if it's real,I can't find the source of this pic.
but look at the fan over the PCB!!! It seems this card runs very hot :shocked:
Could simply be very early ES cards.
What's wrong with having a fan over the top of the card? I do that even when I'm not OCing. I have put up a fan which constantly blows over the top of the card. So when I push it for benching, it cools the back of the GPU and the back of the VRMs.
And when you have two in CF, the top card tends to run hotter. Makes perfect sense to have airflow over the back of the top card.
And maybe that pic deliberately has that Antec fan there so that we cannot count the number of memory chips :P
Or maybe the number of GPU's. :p:
Soooo.
Quote:
Barts Pro = 6850
Barts XT = 6870
And Barts is supposed to be about as fast as HD5870? Great AMD. :down:
Antilles 6990
Cayman XT 6970
Cayman PRO 6950
Barts XT 6870
Barts PRO 6850
Barts = midrange
Cayman = Flagship
Antilles = dual flagship
The 6770 equivalent for the 6 series will be as fast as the 5870 from last gen. What's not to like? We should wait to see what the pricing is like first, at least.
Quote:
And Barts is supposed to be about as fast as HD5870? Great AMD.
if Barts XT is about as fast as Cypress XT (5870), but much smaller (let's say 260mm2) and uses less power... why not? ATi has done that in the past... remember 2900 > 3870?
If priced at GF104 level or slightly above, that would be a real killer!
But there's also Cayman (which could be 50 to 60% faster than Barts XT).
lets see...
Barts PRO - 199,- (460 1GB level)
Barts XT - 279,- (470 level)
Cayman PRO - 349,- (480 level)
Cayman XT - 499,- (+30% above 480)
Antilles - 699,-
:yepp:
Let's see: 6xxx series prices are a bit lower than 5xxx series prices, and the naming convention is :banana::banana::banana::banana:ed up (69xx for single chip cards etc).
Yes, I thumb down AMD and for a reason. :down:
so if the 6k series prices a bit lower than the 5k series (as you have just said), it's a bad thing? :ROTF:
Exactly. When Barts XT is supposed to be the 6870, replace the 5850, and be a bit lower in price, it really is a bad thing. It confuses the :banana::banana::banana::banana: out of the market.
Why can't they do it something like this (naming relative to the performance, NOT the old chip with a renaming):
HD 5770 -> HD 6650
HD 5830 -> HD 6750
HD 5870 -> HD 6770
HD 5970 -> HD 6870
Now they almost do this, but they bump the naming scheme by x1xx, resulting in 6770 -> 6870 @ supposedly 6770 price.
If this all is true, it seems that instead of bringing in actual performance, they just up the names by 100 and keep the price, giving an impression of a bigger leap from the old generation than it actually is. Then again, it is understandable that they're working at the limits of 40 nm and there isn't too much to improve until 28 nm.
Then again, there is no reason to obfuscate the naming convention, but according to the rumours they are doing so. Also, there is no need to sell the parts for a lower price because Nvidia can't compete even with 5xxx, let alone 6xxx... and AMD already sells everything TSMC can produce for them.
we don't know exactly if the mid-range will be named 6870 and 6970 will be what 5870 is now. All are rumours, so we should reserve judgement when they actually release the products.
But I don't think they will do it like this; they have no reason to, and I think it's just confusion among the leakers, because AMD has a record of giving different info to different partners just to see who is leaking info.
That's true, but I don't see any reason why they would try for a big improvement to perf/$ with the new generation, because Nvidia isn't really competing with them in that segment. I'd guess they just improve the uarch enough (4+1 -> 4 transition, ~25 % better perf/mm²), and any performance improvement would come with a price premium. That said, I'd expect the 28 nm transition to be much more interesting, because that's where Nvidia is aiming too.
When the cards are that close usually you have force air in between, especially if you're running them overclocked.
Otherwise you're pulling heat off the back of the other card.
When I was benching triple 4870's I had to put a fan over the cards.
Maybe they changed the naming system because customers are so stupid that when they invest $500 in a graphics card, they think it will remain the fastest and best card for the rest of time, as long as there are many number nines in the name. For example, a Radeon 9990 must last for ages with a high price tag!? :cool:
I prefer the old naming system too, but it probably wasn't the best for marketing.
seems 6870 = Barts XT = 5850, release 25 Oct, according to nordichardware.com
cayman is even more powerful.
that wouldn't make sense if the 6870 has as much power as a 5850
i agree with whoever said it a few pages back: the x770 of the new is the x870 of the old, and every other card scales from there. which gives them plenty of numbers to use for newer stuff that has no contender. the only real question is how much more power there should be going from the 700s to the 800s in the same generation; for the 4000s it wasn't 2x, but in the 5000s it was, and in the 6000s it probably won't be 2x either
The leaked benchmarks have been very specific, heavy complex shader load and heavy tessellation, the predicted changes directly affect these loads. Which means there will be other benchmarks where these changes do next to nothing and the percentage gain will be single digits.
If AMD have made an HD6770 part with a 256-bit bus, it means at the very least it's powerful enough to need that memory bandwidth. You also need a certain die size before you can actually go with a 256-bit bus; I'm not sure, but a 256-bit bus would need more than the 166 mm² offered by the HD5770.
Try to think about it on the same scale as the 4870 > 5770: true, the 5770 isn't faster, but it is comparable. I imagine AMD are just sticking with their plan like Intel are sticking with their tick/tock. It doesn't matter where nvidia are in comparison; you can't plan 12 months in advance thinking 'good enough', because, like AMD are seeing right now on the CPU front, you have to release with intent to kill every time.
5770 and 4870 were manufactured on different nodes, so I'm not sure what comparison you're trying to make there.
but 5870 and 6770 will be on the same node.
so we want to see what amd can do with a different but evolved architecture on the same manufacturing process. thus we compare 4770 with 5770, which are also the same node size and one generation apart:
http://static.techspot.com/articles-.../Crysis_01.png
http://media.bestofmicro.com/3/N/226...%204x%20AA.png
4770 is about 800M transistors and 137 mm^2.
5770 is about 1000M transistors and 166mm^2.
So consider the transistor difference and the game performance increase: 25% more transistors for 10-20% more game performance. So AMD managed to squeeze very little, if any, extra performance out of the new architecture itself.
Now, consider that 4770 didn't have the logic to do DX11 and eyefinity and a few other discrete features that are independent of performance. Let's be generous and say it took about 200M transistors to add those features. In that case, 5770 would be the same size as 4770, but perform 10-20% better, in which case AMD certainly would have improved performance by optimizing the architecture.
If you consider these two scenarios to be constraints on what 6770 will be, we can predict 6770 is anywhere from 0%-20% faster than 5770. I think 10% average is a good expectation to have. I would NOT trust any rumors that indicate otherwise.
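The two bounding scenarios above reduce to simple arithmetic. This sketch just writes it out, using only the figures from the post (the 200M "feature logic" number is the post's own generous guess, not a known value):

```python
# Bounding the 6770's expected gain over the 5770 from the post's figures.
t_4770, t_5770 = 800, 1000   # transistor counts, millions

# Raw scaling: 25% more transistors bought only ~10-20% more game performance.
transistor_growth = t_5770 / t_4770 - 1   # 0.25

# Generous scenario: assume ~200M transistors went to DX11, Eyefinity and
# other features that don't raise game performance. The remaining
# "performance" budget then equals the 4770's, yet runs 10-20% faster,
# implying the architecture itself did improve.
feature_logic = 200                        # the post's rough guess
perf_budget_5770 = t_5770 - feature_logic  # == t_4770

print(transistor_growth, perf_budget_5770 == t_4770)
```

Depending on which scenario you believe, the per-transistor gain of the 4770 -> 5770 generation was anywhere from roughly nothing to ~15-20%, which is what motivates the 0-20% (10% average) expectation for the 6770.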