I get the feeling there's going to be more to all of this than just great fps. Although I don't want to step too far out on the plank!
http://rage3d.com/board/showthread.php?t=33925670
Quote:
Originally Posted by lupine
Coming within 10% would cause a total furor for the half-price 4870, because it would sit between the GTX 260 and 280, which would be enough to collect almost all the sales in this high-end segment. We can only pray. :) I'm really astonished at ATI's pricing policy, but I like it. :)
Saw this while browsing the [H] forums - it appears that the 4870 will not be available until July 8 and is aimed at the 9800 GTX, while the R700 is taking on the new GT200 series and is due out in about 8 weeks (August).....also has some screenshots of the new Cinema 2.0 Ruby demo....
http://www.hardforum.com/showthread.php?t=1316251
What a shame, two months is way too far off; by that time good drivers for the new Nvidia cards will be out and prices will be lower than they are right now.
http://www.amd.com/us-en/assets/cont...D_Ruby_S04.swf
Here is the movie of Cinema 2.0 demo WITH Ruby.
"ATI icon “Ruby” stars in the first-ever Cinema 2.0 experience.
Rendered in real-time and interactive, this is a brief video from the first Cinema 2.0 demo, premiered by AMD in San Francisco on June 16, 2008. The interactive demo was rendered by a single PC equipped with two "RV770" codenamed graphics cards, powered by an AMD Phenom™ X4 9850 Processor and AMD 790FX Chipset. The full demo shows cinema-quality digital images rendered in real-time with interactivity. Check back later this summer for a video of the full Ruby Cinema 2.0 demo."
hmmmm I don't like the sound of this. Soft launches, high end in "8 weeks" etc. Come on AMD, launch on time and no paper launches.
If it's something other than CrossFire-on-a-card, it's worth the wait because it will probably be a game changer.
I like paper launches as long as the date of availability is accurate. I hate all this Nvidia cloak-and-dagger NDA stuff, with everything about the card being secret until a day before store availability. It's ridiculous, and it makes it hard to plan a system in advance. Do you think I would be sitting with a bloody 8800 GTS that I bought in late March if I knew that Nvidia was releasing a totally new architecture in June?! It seems like a cheap/dirty way to scam us into buying cards that we wouldn't otherwise buy.

On the other hand you have Intel, which announces a new architecture 6-12 months ahead of time. I am already planning my Nehalem upgrade for Jan-Feb 2009 because I can. With Nvidia we could never make plans like that; all we have are rumours. It's true that paper launches with no concrete time frame of actual availability (like Seagate's 1TB drive announcement) are extremely annoying, but I don't think complete ignorance is a very good option either. Maybe they should pretend to be consumers for a second and ask themselves what they would like.
Quote:
launch on time and no paper launches.
If the HD 4870 is competing with the 9800 GTX, then where is the HD 5870, the competitor for nVidia's next generation of cards? The 8800GT was released in December 2007, and it took them 7 months to bring out a competing product; by that logic, the next generation should be available 7 months after nVidia's release of the GTX 260/280...
I'm sick of this cat-and-mouse game: nVidia turned out to be Jerry, the small and smart mouse (going by the specifications), while ATi got the role of Tom, the big and dumb cat. This conclusion is based on past and present facts... not future wannabes.... :shakes:
The 4850 is competition for the 9800GTX and it's priced at $200
From what I've been hearing, 4870 is creeping closer to the GTX260
I don't understand where you're getting the rest of your rant from. Considering how far behind they were at this point a year ago, the fact that people are saying these $199 and $299 cards are going to perform well is good news.
http://www.overclockers.ru/images/ne...7/rv770_02.jpg
Now who's the big dumb cat in that pic?
They left off every possible other spec which is *not* in their favor.
I find it funny how many people complain about efficiency. Do you sit there and have a relative measure the watts drawn while you play your game too?
"OMG TURN YOUR GUY AND LOOK AT A WALL, QUICK, YOU'RE DRAWING TOO MUCH POWER. I DON'T CARE IF THIS IS A BOSS FIGHT LOOK AT THE F'ING WALL."
What happened to the days when gaming was about the experience you get?
When the GT200 is MORE THAN TWICE AS LARGE as the RV770, I'm certain you're just... wrong.
If AMD wanted to slaughter nVidia single-handedly in the high end, they'd make a chip as big as the 8800GTX/2900XT: smaller than the GT200, but the performance? Truly owned.
AMD's being the smart guy here, choosing the right architecture.
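To put rough numbers on the die-size point, here is a minimal sketch using the ballpark areas being reported, roughly 576 mm² for GT200 and 256 mm² for RV770; treat both figures and the simple dies-per-wafer approximation as assumptions rather than official data.
Code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # First-order estimate: wafer area / die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2.0
    wafer_area = math.pi * radius * radius
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Assumed ballpark die areas in mm^2, not official figures.
print("GT200 (~576 mm^2): ~%d dies per wafer" % dies_per_wafer(576))  # ~94
print("RV770 (~256 mm^2): ~%d dies per wafer" % dies_per_wafer(256))  # ~234
Before yield even enters the picture, the smaller die gives roughly two and a half times as many candidate chips per 300 mm wafer.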
Really?
What's a Phenom (Barcelona) then?
There's the problem of needing both GDDR5 and a >256-bit memory bus for enough bandwidth, though. The former hurts prices; the latter hurts ATI's chip size more because the ringbus itself is quite huge. The "SP"s and TMUs are much cheaper, though; cheap enough, I might add, for nVidia to be very afraid.
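For a sense of the bandwidth trade-off being described, a quick sketch: peak memory bandwidth is just bus width times effective data rate. The clocks below are the figures being passed around for the HD 4870 and GTX 280 and should be read as assumptions, not confirmed specs.
Code:
def peak_bandwidth_gbs(bus_width_bits, effective_mtps):
    # Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second).
    return bus_width_bits / 8.0 * effective_mtps * 1e6 / 1e9

# Assumed configurations (rumoured figures):
print("HD 4870, 256-bit GDDR5 @ 3600 MT/s: %.1f GB/s" % peak_bandwidth_gbs(256, 3600))  # ~115.2
print("GTX 280, 512-bit GDDR3 @ 2214 MT/s: %.1f GB/s" % peak_bandwidth_gbs(512, 2214))  # ~141.7
In other words, GDDR5 lets a 256-bit bus land in the same ballpark as a 512-bit GDDR3 bus without the die cost of a wider memory interface.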
These cards can do double-precision floating point using all their existing SP units, while nVidia had to add new dedicated units and ended up with a lower DP ceiling (200 GFLOPS on RV770 vs 70-something GFLOPS on GT200, if my memory is right).
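Those 200 vs 70-something GFLOPS figures can be reconstructed from the rumoured unit counts and clocks; every constant below is an assumption based on what was circulating at the time, not a confirmed spec.
Code:
# All figures below are assumptions based on rumoured specs of the day.
RV770_SPS       = 800    # stream processors
RV770_CLOCK_GHZ = 0.625  # HD 4850 core clock
RV770_DP_RATIO  = 0.2    # DP reportedly runs at 1/5 of the SP rate

GT200_DP_UNITS   = 30     # one dedicated DP unit per SM, 30 SMs
GT200_SHADER_GHZ = 1.296  # GTX 280 shader clock

rv770_sp_gflops = RV770_SPS * RV770_CLOCK_GHZ * 2        # SPs * clock * 2 FLOPs (MAD) ~ 1000
rv770_dp_gflops = rv770_sp_gflops * RV770_DP_RATIO       # ~ 200
gt200_dp_gflops = GT200_DP_UNITS * GT200_SHADER_GHZ * 2  # units * clock * 2 FLOPs (FMA) ~ 77.8

print("RV770 DP: %.0f GFLOPS, GT200 DP: %.1f GFLOPS" % (rv770_dp_gflops, gt200_dp_gflops))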
Yeah, they've admitted to that, so that's nothing new. I think their strategy is a very good one. A card that rivals a 9800GTX for $200 sounds terrific.
They simply can't compete at the high end, so they are bringing out cards that perform very well and are cheap to make (and therefore cheap for us).
So long as we the consumers win, I'm happy.
Like I said, read the article on EETimes about the launch of the 48xx's and the GTX200's: here
Some quotes:
I have definitely heard rumblings that Nvidia is moving towards more of an ATI-esque approach to this, though to what extent I do not know. I know that most GPU manufacturers lay out the specs for their generations long in advance (for example, G80 was supposedly designed years before it ever went into actual production), so that part I do believe, but I also believe their economic situation + the failure of the R600 pushed them down this path a lot faster/deeper than anticipated.
Quote:
"We didn't want to come out with one monolithic GPU and then disable parts of it for different markets," said an AMD spokesman prior to a full disclosure of the part in a briefing in San Francisco on June 16.
The strategy makes sense for the financially troubled AMD which also has laid out conservative road maps for its computer processors. The graphics choice reduces costs and risks while maximizing returns for the company which has suffered through multiple loss-making quarters.
The decision to use a two-chip strategy for the high end was made more than two years ago, based on an analysis of yields and scalability. It was not related to AMD's recent financial woes, said Rick Bergman, general manager of AMD's graphics division.
"I predict our competitor will go down the same path for its next GPU once they see this," Bergman said. "They have made their last monolithic GPU."
It might have to do with TSMC not being able to get good yields on that big of a die, so they have to go with smaller dies. From the size of the die, I'm pretty sure it won't have 800 SPs; probably the originally rumored 480 SPs would be the case.
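For what it's worth, the yield argument can be illustrated with the textbook Poisson defect model, where the fraction of defect-free dies falls off exponentially with area. The defect density below is purely illustrative, and the die areas are the same ballpark assumptions as earlier.
Code:
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    # Textbook Poisson model: fraction of dies that catch zero defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.4  # illustrative defect density in defects/cm^2, not a real TSMC number

for name, area_cm2 in (("GT200 ~5.76 cm^2", 5.76), ("RV770 ~2.56 cm^2", 2.56)):
    print("%s: yield ~ %.0f%%" % (name, 100 * poisson_yield(area_cm2, D0)))
Combined with the dies-per-wafer gap, a large monolithic die can easily end up several times more expensive per good chip, which is the premise behind the two-chip strategy quoted above.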
You heard those very rumblings from AMD. What do you expect? They want to paint themselves as industry leaders. Every company in every industry does this. Only some are right though. It's harder/more expensive to go bigger. It's easier/cheaper to relegate to lower end chips.
It has to have 800 SPs if it reaches 1 TFLOP with the 4850, unless there's a 1 GHz shader domain we haven't seen; but that wasn't detected by the latest version of GPU-Z by w1z, so I'm believing it's 800 SPs.
P.S. Die size isn't an indicator of how many SPs are in there, since we don't know exactly how much space each SP actually takes up... looking at the GT200 die shots, they don't even use as much as we'd think.
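The 800 SP inference is just the peak-FLOPS formula run backwards, assuming 2 FLOPs per SP per clock and the rumoured 625 MHz core clock for the 4850 (both unconfirmed):
Code:
def sps_for_target(target_gflops, clock_ghz, flops_per_sp_per_clock=2):
    # Invert peak GFLOPS = SPs * clock(GHz) * FLOPs-per-SP-per-clock.
    return target_gflops / (clock_ghz * flops_per_sp_per_clock)

print(sps_for_target(1000, 0.625))  # 800.0 -> 800 SPs at a 625 MHz core clock
print(sps_for_target(1000, 1.0))    # 500.0 -> only 500 SPs if there were a 1 GHz shader domain
So a 1 TFLOP claim with no separate shader clock showing up in GPU-Z points straight at 800 SPs; a hypothetical 1 GHz shader domain would only need 500.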
Right, a lot of it is PR, but I can also see some of the logic behind it: since neither company has fabs for their GPUs, they really are at the mercy of the foundry's schedules for processes. So if yields on monolithic GPUs aren't up to par at the best process you can afford, it's a risky proposition to put all your marbles in one bag, no matter how well off you are as a company.
BTW, those rumblings about Nvidia weren't from an AMD source at all; it was actually an Nvidia source, and I know some of it was published somewhere recently but got lost in the shuffle of the GT200 launch.