Source: Engadget via Chiphell
http://www.blogcdn.com/www.engadget....b24edfhhbn.jpg
Repost
With TDP and Max Res this time
http://tof.canardpc.com/preview2/839...1fecce67fd.jpg
Ooooohhh - me likey! I was going to replace my HD4850 with a HD5770, but if these are likely to come out in Q1/Q2 of 2011, I'll hold on that little bit longer and get one of those instead :)
I want 68x0 numbers :D
Due October 25th apparently
It's looking good indeed, but I'm not buying anything until Bulldozer/SB 2011 and the full Northern Islands.
A $220 6770 would be fantastic. Unfortunately it probably won't be that cheap outright. It seems we will get better value in the mid-range segment from AMD this round so HOORAY!
So where does twice the horsepower come from? I don't see it in the specs... someone educate me please.
I'll get a 6770 on launch or near to it - I can't afford more than about £175.
Barts XT = 6870
Barts PRO = 6850
The Barts retail model is indeed an HD6800
Quote:
PRO is the HD6850, XT is the HD6870. I had the second edition of the roadmap, where the basic model is the ultimate edition.
More ROPs translates to more performance... Lovely. But HD68xx. :(
Hmmm, finally looks like I may be replacing my 4850 with a 6750, curious to see the performance numbers now...
agreed, i'm still not very happy with either xfire or SLI. you quickly lose perf per $ and perf per watt, and then you have the bugs that come with it.
i just found out my 4850 crossfire was effectively off for a few months because i had Catalyst AI set to off. (i have the right to be mad at CCC for not telling me in the diagnostics that i should have turned that option ON for things to work right)
i'm sticking with microATX and a single card for a while, until things like shared memory come out. i think dual GPU is just too much headache for the gains.
Nice... so about 15-20% higher perf/watt. Pretty good considering that it's on the same 40 nm process. :up:
lol yeah ^^
Tripleposted x.x.
Maybe the forum should install an automerger?
In fps numbers, yes.
In actual fluidity, no.
Games are jittery with multi gpu cards.
So that extra 40fps or whatever you gain with a second card has the visual effect of gimping you to the smoothness of 15fps on a single card.
Go read:
http://www.xtremesystems.org/forums/...d.php?t=258433
25fps single card is better than 50 on sli
http://img227.imageshack.us/img227/6...livssingle.png
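To put the fps-vs-fluidity argument in rough, made-up numbers: if a dual GPU setup alternates 10ms and 30ms frames, the counter averages that to 50fps, but the motion only advances at the pace of the slow frames, closer to 33fps, with visible unevenness on top. A single card at a steady 40ms per frame (25fps) can feel smoother, which is the claim above.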
Surprised that no one has specifically pointed out the confirmed change from a 5D shader to a 4D shader set-up.
I'm afraid that's not always the case. To many people, SLI/CF has certainly helped: it basically enabled them to game smoothly at higher settings. Was it an illusion? Nope. Microstuttering, occasional lags etc are all points against multi GPU, but the sheer performance gain in a decently optimised game more than makes up for it, at least to many people I know who use them.
To each his own. If you think single GPU > multi GPU, you are entitled to your opinion, but I don't necessarily agree with that statement fully.
Moving on from the OT, Barts looks good! I also hear it overclocks like mad; should be interesting to play with at least.
What games do you play?
Microstuttering is less pronounced with higher performance.
In games which need multi gpu setups for maximum settings, stuttering is obvious.
There is plenty of evidence in the thread linked.
Hearsay doesn't match evidence in debate.
Have some video links to 5970 ms.
http://www.youtube.com/watch?v=StgYSLr0XiY
TAKE A TEST.
Can you see stuttering in this video?
http://www.youtube.com/watch?v=emG7ZNIsxw8
It's clearly there, very easy to spot.
Use the lowest playback resolution to avoid playback stutter.
Let me know if you see the stuttering because I believe you do have it but are just ignorant.
the first video isn't microstuttering; it's frame buffering or load. when i had a 3870 CF setup i never noticed microstuttering, but i did notice gains in fluidity when switching to a 4870, which was just as fast in properly optimized titles and faster than the CF in unoptimised titles...
however, CF does give you noticeable performance gains (just take a look at the frametimes you posted: even the slowest frametimes of the SLI setup are better than the baseline). average fps suggests that you get a 90-100% gain, but looking at frametimes you get a worst-case increase of around 50-70%, which is still very noticeable in a lot of games. i'd say that a 30fps single setup feels better than a 35-40fps CF setup in some games which suffer from MS (not all games create microstutter); but when properly optimized, a CF setup is 70-100% faster than a single setup, so you won't notice the uneven frametimes...
i didn't want to start up a war over multi GPUs and stuttering.
what i have experienced in many games is that if i leave FPS at 60 vsynced, i get some actual "jumping", a short stutter about once every 2-3 seconds. so i set the framerate limit to 55fps, and it's 100% gone. i do notice that multi GPU lets me set things much higher, but the per-game headache can still suck, and if i have to keep adding console commands (if the game even has one) it becomes too much work for that extra perf. i would rather pay 2x and get 1.5x of a single card than get a theoretical 1.8x with a dual card.
btw, in the screenshot Jowy showed, the frame rate is about 40-45 on the dual GPU (even though the average would appear to be ~55 to fraps), while the single is 30.
what free program is good for checking per-frame delay times? i'd like to do some testing with my setup
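Not sure there's a dedicated free tool beyond FRAPS itself: its benchmark logging can dump per-frame timestamps, and a few lines of Python can turn that into stats. A rough sketch, assuming the log is a CSV with a header row and cumulative timestamps in milliseconds in the second column (check your file's layout before trusting it):

```python
# Rough frame-time analysis sketch for a FRAPS-style frametimes CSV.
# Assumes: header row, second column = cumulative time in ms.
import csv
import sys

def load_frame_times(path):
    """Turn cumulative timestamps into per-frame durations (ms)."""
    stamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            stamps.append(float(row[1]))
    return [b - a for a, b in zip(stamps, stamps[1:])]

def analyse(times):
    avg = sum(times) / len(times)
    worst = max(times)
    # crude "microstutter index": average jump between consecutive
    # frame times, relative to the average frame time
    jumps = [abs(b - a) for a, b in zip(times, times[1:])]
    ms_index = 100 * (sum(jumps) / len(jumps)) / avg
    return avg, worst, ms_index

if __name__ == "__main__":
    avg, worst, ms_index = analyse(load_frame_times(sys.argv[1]))
    print("avg frame time : %.2f ms (%.1f fps)" % (avg, 1000 / avg))
    print("worst frame    : %.2f ms (%.1f fps)" % (worst, 1000 / worst))
    print("stutter index  : %.1f%% (above ~20%% reportedly feels jerky)" % ms_index)
```

The index here is just an illustration; the thread linked earlier uses its own formula, so compare single-card and CF logs with the same metric rather than trusting the absolute number.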
single gpu rocks... no it doesnt .. yes it does ... no it doesnt ... LOLL
The actual FPS don't matter when there is micro-stutter.
To me, I don't care what my frames are; if they have micro-stutter, then it's pointless. Again, 100FPS with stutter is meaningless, because it's not enjoyable.
I think Jowy's criteria is the same.
single GPU is less hassle, and with overclocking the high end 6000 series should be enough for most uses.
even with eyefinity.
If they added enough performance gain then a single card should be equal to a 5970+ with overclocking.
we'll find out soon.
poverty's a :banana::banana::banana::banana::banana:
gotta love disclaimers...
Barts xt = 6870... haha. It's going to be slower than a 5870... lame marketing...
The thread you gave is pure win (I read it before you posted). Anyway, there's a microstutter index threshold (it's in the thread somewhere. Above 20% is bad IIRC. Then again, the threshold might vary with people) before the motion becomes non-fluid aka microstutters.
Well you just looked at them.
Barts is being released on Oct 18th with massive availability.
The chart is not completely real...
He is right though, well about the naming.
I'd be willing to bet that barts pro/xt will be 6750/6770. There isn't any reason for AMD to change their nomenclature for this generation.
Hemlock 59*0 (New this generation)
Cypress 58*0
Juniper 57*0
Antilles 69*0
Cayman 68*0
Barts 67*0
Unless they're planning on releasing a :banana::banana::banana::banana: load of different variants, I'd sincerely hope that they'd stick to the scheme they just finished establishing this cycle.
The 2gb of ram, budget videocard and asrock MB don't really inspire confidence.
If AMD turns Barts into a 6870 part, those who criticize NV for renaming practices had better drop the hammer here too, because this is just as bad a move if not worse, since you may be getting worse performance. Compounding it, this move would be fueled entirely by increasing margins rather than by survival.
It's not certain, but given the increase in card size, along with some people saying it is true, it could really turn out to be reality.
I for one will not try to be uppity and claim I won't buy AMD anymore if it does turn out to be the case.
To me, those specs in the OP really don't look like *7** range cards at all. Even if they are, they are going to cost the same as the 5850 currently does; no way is that 'Barts XT' card going to be launched at the <£150 price point where the *7** range is supposed to sit.
And more lulz over the microstutter BS in this thread. I've been through crossfire 3850s, 4850s, 4870s and 5770s, and SLI 6800s and GTX 460s, and never seen any kind of microstutter in any game.
The vast majority of the people who complain about microstutter haven't even used multi GPU setups themselves; they pull out info about microstutter based on what they read around the net, with no personal experience of using crossfire or SLI.
I would NEVER go back to using a single hot and noisy card over two cheaper mid range ones that perform better with LESS heat and noise (two GTX 460s = FAR FAR less heat + noise than a single GTX 480; anyone that believes otherwise is completely clueless).
a lot of people here are pretty sure of things they don't know.
what I do know is that the rumor about barts being 6800 and 5770 being renamed to 6770 was very creative if false...
A lot of folks are hung up on the naming. Why? Maybe they want to keep the x7xx naming scheme for the 128-bit parts. Also, why would one be mad if he's getting 58xx performance in the 68xx at a lower price?
you guys missed the best point: the 5870 is 320x5, i.e. 320 physical shaders with 5 scalar units per physical shader, and the other HD5 cards also have 5 scalar units per physical shader, but this lists the 6770 as 320x4. we know as of now that the shaders are not efficient since they have too many scalar units per physical shader, so it will be interesting to see how this works out for clock speed and performance.
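For the arithmetic: 320 x 5 = 1600 stream processors on the 5870 versus 320 x 4 = 1280 if this 6770 listing is right, so on paper it's trading the fifth (hardest to keep busy) ALU slot for die space and/or clocks.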
I totally agree with you on this concerning air cooling; for air cooling the GTX 480 is a fail. but for water cooling users the GTX 480 is cheaper than 2 GTX 460s, some if not most 2D and/or 3D programs do not use the SLI capability, and half of the memory is lost with SLI. So for water cooling users a GTX 480 would be the best choice yet.
http://www.techpowerup.com/reviews/N...60_SLI/25.html
SPARKLE GTX 480 1536MB = $439 http://www.newegg.com/Product/Produc...-109-_-Product
MSI N460GTX 1024MB $219 http://www.newegg.com/Product/Produc...82E16814127510
The single GTX 480 can be used at full capacity by any application and also has 512MB more video memory, while GTX 460 SLI gives games and applications, when it works, 13% more performance but you pay 40% more for another waterblock.
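Quick math on those links: 2 x $219 = $438 for the pair, essentially the same as the $439 GTX 480, so under water the real extra cost is the second waterblock (whatever your block of choice runs) set against that ~13% gain when SLI actually scales.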
In the end they have the same outcome; it's a matter of preference and of what it will be used for. And much like the marketing BS of 'Fermi done right', Nvidia will get your money both ways and they know it's a draw.
on topic: the specifications of the ATI 6800 series look good; now prices are needed.
I am just wondering: if they removed 1 shader from a 5-shader unit to make it a 4-shader unit, how is there a 25 percent area saving as rumored? Assuming they removed one of the simpler shaders not capable of double precision (hence the efficiency not dropping that much), how is there a 25 percent saving, since these simple shaders are thought to be rather small compared to the more complex shader? 25 percent sounds a tad high.
Specs are interesting if true. I wonder what else has changed besides the shader/rop/tmu layout. Did it get other NI features also? Is it an un-shrunk NI or the rumored frankenstein? I guess we don't have to wait long to find out now.
If they are indeed calling a Barts part 6870 I can't support that. I can understand, from a business perspective, why they might want to fill out the 69XX range instead of jumping from 6870 to 6970. But IMO it doesn't serve to clarify anything and will only end up misleading the typical uninformed customer. Their top single chip card has been X870 for a while now and should stay that way, IMO.
Going off topic here, but... don't judge a book by its cover. My uncle has a multi-million Rand house (converted to dollars, a few hundred thousand), yet he drives a Toyota Avanza and his PC is an AthlonXP 3000+ with 256MB RAM. Why? Because he doesn't have a need for an expensive car. Because his PC is more than fast enough for his needs. Not EVERYBODY's priority lies in having the most expensive hardware, and even though this is XtremeSystems I can guarantee you that a lot of members with low spec PCs can afford something better but choose not to ;)
This thread has all the trademarks of a near-release GPU thread
Renaming is not acceptable for whatever reason.
hmmmm why is the 6770 running the mem at only 1050?
less tmus, meh :/
awesome tdps!
if this is priced anywhere near 5700 cards this will be very tempting!
gtx460 what? :D
EDIT: wait what? barts is 6800? 0_o
What? The new 4D system allows ~98.5 % of the performance at ~75 % diespace. So they can put 25 % more shaders to the same die area, meaning more theoretical number crunching capability per mm².
What I suspect is that AMD isn't really going to just put 25 % more shaders to the core, instead focus on ROPs, TMUs and memory bandwidth because there are bigger bottlenecks there than in raw number crunching.
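Sanity-checking those figures: 0.985 / 0.75 ≈ 1.31, so if the 98.5%-performance-at-75%-area estimate holds, a 4D layout buys roughly 25-30% more shading throughput per mm², which is where the 'more shaders in the same die area' argument comes from.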
maybe...barts is being called HD6800 because of its tesselation performance?
I mean, barts is slightly slower than cypress, but with tesselation it's (supposedly) much faster, so AMD decided to call it HD6800 and not HD6700?
LOL how can people know how fast or how slow it is without benchmarks ... LOLL
you keep pushing settings till crap starts to get laggy; i've done that on dozens of games without ever running fraps.
i probably spend an hour just playing with the graphics options to see what they all do, and which ones provide ME with the eye candy i want.
boob bouncing needs to be an option in every game:D
so your crystal ball told you so??? cool, i want one, because obviously someone left me out of the loop on this one...
the same thing happened with fermi... it was supposed to be here early and be a massive super uber monster, and the hd5K series was supposed to be some cheapo stuff... yet that's not what happened... so we should all wait and see if the rebrand rumors are true or false etc... anyway, it's not like it's another 6 months of waiting.
Looks like some new cards are headed our way, but I have one major question: where the hell are the fancy new DX11 games to take advantage of the added horsepower? What's the purpose of the new hardware, unless you game at really high resolutions?
Looks like a bunch of change for the sake of progress. I know we need progress, but the industry needs some real change. In my opinion, better software > faster hardware.
i don't get how you would drop 25% of the shader area by dropping a small amount of cache and a little smu. the master shader is the largest part, then there is a small cache for each scalar unit, so at most it should give you 20% of the die back by changing from 5 to 4, but it should be a lot less than that (i would guess like 10%). the main advantage would be making it more dynamic: you have to send all of the commands at once to the master and the scalar units, so you should be able to get more efficiency by having more masters per scalar unit, but then overall you would have more die space per shader unit.
on the ROPs, sure, more would be nice; maybe there will be a higher end part. but for memory, we are not bandwidth limited ATM, and clocking the RAM up does not yield much gain if any; even underclocking the RAM you don't change performance much.
i'm wondering if they are pulling a marketing stunt, saying it's faster, but ONLY in dx11 games
you can thank microsoft and sony (mostly microsoft) and the current dx9-generation hardware in their consoles for that. most game companies write the game for the console and port it over to the pc, and since the consoles are running on dx9 hardware they are not going to waste their time coding for both dx9 and dx11. it's sad really. microsoft recently commented that they are going to start paying attention to their pc segment again, whatever that means. hey microsoft, wanna help the pc which you seem to care about again? release a new console with dx11 hardware, not a $150 attachment to make the current xbox a glorified wii!!! sorry, i had to rant
pc games don't make enough money for big companies to make fancy dx11 games.
so what video card companies have done to keep business growing is push features like 3d and multiple displays. both technologies require a jump in graphics hardware just like a fancy dx11 game: 3d cuts fps in half, 3 screens cuts it to a third. so we demand new and faster video cards to make these new features work better, even though we didn't even want them a few years ago
but this doesn't make sense...
if rv940 is 6800 then what is cayman?
if rv940 is 6800 then what is 6700?
if rv940 is slightly slower than rv870 how is it going to replace it?
nvidia will launch a faster part, ati will launch a slightly slower part?
that doesn't make any sense... where does this rv940 aka barts = 6800 rumor come from anyway? :eh:
Correction: games that have been marketed sell as well as the marketing was hyped, which is the reason some companies spend more on marketing than on creating the game.
There are many amazing games that people have never heard about, which were sales failures due to weak promotion.
Bottom line: the marketing BS nowadays wins over most customers, and that is a fact.
This is a huge subject that I do not want to discuss any further.
yeah but if rv940 is 6800, then rv970 is what, 6900? and then dual rv970 is... what? :D
faster than rv840, but not as fast as rv870...
either memory chips that clock slightly lower are cheaper, or they are doing it on purpose... maybe it would be too close to rv970 otherwise... or maybe they are reserving some space for a refresh in case 28nm has problems like 40nm?
It still doesn't make sense from the consumer PoV though. We'll have to wait a couple days more I guess, there are leaks everyday now, so the speculation time might be over soon.
Is NDA for these cards still the same ?
Who knows; NDA ends when the release date arrives. I think the 12th-14th was suggested as a press event under NDA, and then give it a week or two for reviews, so by the end of the month is most likely.
Nice mess of rumors going around. I still stick to my 40% performance improvement estimate, regardless of any naming shenanigans.
7 figures in your bank? really?
http://2.bp.blogspot.com/_2kjisMm3M9...k_robber_7.jpg
x 7 = 7 figures in your bank
back on topic:
Looking at the numbers the 6770 should beat my 5850 using less power...wonder what the temperatures will be like.
Hey, I don't know about you, but I think a spare 20k would be nice either way. Of course, I am a working college student with only 2k to my name. So maybe my view is different than yours.
Either way, these cards are pretty exciting to me for the sole fact that this will give friends a good priced path for tons of gpu power.
Lol I got 7 figures too if you count the pennies (ie. xx,000.00).