Does it look short or I dunno something looks different than the 4870
News from Xfastest
http://www.xfastest.com/viewthread.php?tid=20277
They are selling it in Taiwan (ASUS started selling the HD4890 first; are both ASUS & Gigabyte ignoring the NDA??)
I heard a rumor that they are on sale in Hong Kong as well.
I will try to confirm that tonight :yepp:
Radeon HD4890 vs GeForce GTX275 on April 9
http://vr-zone.com/articles/radeon-h....html?doc=6779
Quote:
Radeon HD 4890 (55nm RV790)
* HD 4890 is slated for launch on April 9th instead of 6th
* Cards have been shipped out to distributors and reviewers already
* 800 stream processors
* 1GB GDDR5 memory on 256-bit memory interface
* Clocks : 850MHz core and 975MHz memory
* Internal benchmarks claim 6% faster than RV770
* Expect to retail at USD$249
GeForce GTX 275 (55nm GT200)
* Expect to launch on April 9th as well
* No card samples yet even for AIC partners
* 240 shader processors
* 896MB GDDR3 memory on 448-bit memory interface
* Clocks : 633MHz core, 1404MHz memory and 1161MHz shader (Not final yet)
* Simulated benchmarks by partners reveal that it is slightly faster than the GTX 260
* Expect to retail between US$229-$279
Since GPU performance scales roughly linearly with frequency and the RV790 already runs 13% faster, the author must have meant the quoted "6% faster" as clock-for-clock.
If it scales linearly, clock for clock we could see a 6% improvement.
The reference clocks of the RV770 and the RV790, 750MHz and 850MHz respectively, add another ~13%.
A total of around 20%?
But could the 50MHz increase in GDDR5 frequency affect performance, or is it negligible?
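The scaling arithmetic above can be sketched quickly. This assumes performance scales linearly with core clock, which is idealized; the 6% clock-for-clock figure is just the rumored number from the VR-Zone quote:

```python
# Rough estimate of RV790 vs RV770 speedup, assuming perfectly linear
# scaling with core clock (an idealized assumption, not a measurement).
rv770_clock = 750  # MHz, reference HD4870
rv790_clock = 850  # MHz, reference HD4890

clock_gain = rv790_clock / rv770_clock - 1          # gain from clock alone
ipc_gain = 0.06                                     # rumored clock-for-clock gain

total_gain = (1 + clock_gain) * (1 + ipc_gain) - 1  # combined gain

print(f"clock gain: {clock_gain:.1%}")  # clock gain: 13.3%
print(f"total gain: {total_gain:.1%}")  # total gain: 20.1%
```

Note the gains multiply rather than add, which is why 6% on top of 13% lands near 20% rather than 19%.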
I think at the end of the day... I have played a few games where running my 4870s at a 1000MHz core would have really helped...
so this is all good as far as I'm concerned...
2 weeks to go....
So, how soon after release does anyone think they will be able to post results? :P
I've got an HD4870 now and am willing to buy the HD4890 right after release if there is enough of a performance gain. :)
If it doesn't have a separate shader-clock domain, the shaders won't scale.
Maybe that's the reason why 13% higher clocks mean a 6% performance boost.
HD4890 is listed in Holland :D
http://tweakers.net/pricewatch/23669...dr5-pci-e.html
Curious, maybe someone can explain: why does the ATI card have only a 256-bit memory interface? It has a bit more RAM than the GTX, yet the GTX has a 448-bit memory interface.
It uses GDDR5 and doesn't need a wider bus.
Huh? It will still scale linearly, since raising the core speed also raises the ROPs, TMUs and shaders.
That 6% increase is supposedly clock vs clock.
GDDR5. They can have a 256-bit interface and still have bandwidth similar to the wider GDDR3 buses on the G200.
My wild guess here is that AMD knew there was a fair, but not overwhelming, amount of improvement to be had from the RV770 core. So they poured the majority of their engineering resources into RV870, which there are now rumours of an earlier-than-expected release for. Still, I bet the HD4890 will be a worthy successor to the HD4870 in a couple of ways. However, there is a good chance that I'm lost out in left field here too.
More like AMD realized how closely the RV770 was already competing with the G200 and, with the delays on 40nm, decided an RV790 would be a good filler until they could get RV870 out.
40nm was supposed to be ready in Q4 '08 but was delayed a quarter; RV870 was a late-Q2 target. Then it was pushed back to Q3, with only a rumor of another delay pushing it into Q4, even though most rumors say RV870 is still on track and looking like a Q3 launch.
Where do you guys read that it's 6% faster clock for clock?
That just means the RV790 is 6% faster than the RV770, 850MHz vs 750MHz; it could even be 6% from the 4870 to the 4890. :confused:
Quote:
Internal benchmarks claim 6% faster than RV770
To be precise:
The HD4870 has eight GDDR5 chips, each 1Gbit (128MB) in capacity, yielding a total of 8×128MB = 1GB; since each chip has a bus width of 32 bits, the whole bus combined is 8×32bit = 256bit.
The GTX260 has 14 GDDR3 chips, each 512Mbit (64MB), totalling 14×64MB = 896MB, with a total bus of 14×32bit = 448bit. The GTX280 has 16 chips, so substitute 16 for 14 in the calculations.
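Largon's chip math generalizes neatly, since capacity and bus width both scale with the chip count. A small sketch using the per-chip figures quoted above (1Gbit GDDR5 and 512Mbit GDDR3 chips, each with a 32-bit interface):

```python
# Derive total memory capacity and bus width from the chip configuration.
def memory_config(chips, mbit_per_chip, bus_bits_per_chip=32):
    capacity_mb = chips * mbit_per_chip // 8   # Mbit per chip -> total MB
    bus_width = chips * bus_bits_per_chip      # per-chip buses add up
    return capacity_mb, bus_width

# HD4870/HD4890: 8 GDDR5 chips, 1Gbit (1024Mbit) each
print(memory_config(8, 1024))   # (1024, 256) -> 1GB on a 256-bit bus
# GTX260: 14 GDDR3 chips, 512Mbit each
print(memory_config(14, 512))   # (896, 448)  -> 896MB on a 448-bit bus
# GTX280: 16 chips
print(memory_config(16, 512))   # (1024, 512) -> 1GB on a 512-bit bus
```

This is also why the GTX260 ends up with the odd-looking 896MB figure: it is simply two chips (and two 32-bit channels) fewer than the GTX280.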
It's not wise to believe everything you read. Fact is, the 13% frequency increase alone will gain 13% more performance.
And come on, would it make sense to produce different silicon for a mere 6% total performance increase? They could have cherry-picked age-old RV770s to get more than 6%, cheaper and with less hassle.
:stick:
To add to what largon has said: the 4870/4890 uses GDDR5, whereas the GTXs use GDDR3.
GDDR5 is quad pumped, transferring four bits per pin per clock cycle, so 900MHz works out to 3600MHz effective. 3600MHz × 256bit = 921600Mbit/s ≈ 900Gbit/s.
GDDR3 only transfers two bits per pin per clock cycle, so 999MHz × 2 = 1998MHz effective, and 1998MHz × 448bit = 895104Mbit/s ≈ 874Gbit/s.
So you see, the total actual memory bandwidth is very similar between the cards; they just use different methods to get there.
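The bandwidth arithmetic above fits in a few lines. This sketch divides by 1024 to match the binary-prefix figures used in the post (921600Mbit/s ÷ 1024 = 900):

```python
# Bandwidth = base clock x transfers per clock x bus width.
# GDDR5 moves 4 bits per pin per clock, GDDR3 moves 2.
def bandwidth_gbit(clock_mhz, pumps, bus_bits):
    mbit_per_s = clock_mhz * pumps * bus_bits  # total Mbit/s across the bus
    return mbit_per_s / 1024                   # Gbit/s, binary prefix

print(bandwidth_gbit(900, 4, 256))  # HD4870, GDDR5 -> 900.0
print(bandwidth_gbit(999, 2, 448))  # GTX260, GDDR3 -> 874.125
```

So the narrow GDDR5 bus and the wide GDDR3 bus land within about 3% of each other, which is the whole point of the comparison.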
Just for the hell of it, I will also mention that from a card-manufacturing point of view, the lower bus width (e.g. the 4870's) is easier to implement: there are half as many wires/traces to cram onto the circuit board with a 256-bit bus vs. a 512-bit bus. This makes the card cheaper to produce and is possibly part of the reason the Radeons are cheaper. It is also a technique that has successfully been used to make many a mid-range card: create the high-end card, let the manufacturing processes mature and cheapen, chop the memory bandwidth in half, use a cheap but matured memory technology running at full steam, and you have a decent-performing mid-range part. Anyway, enough lecturing; someone will correct me soon.
-Muunsyr
Beyond3D thinks that ATI upped the die size by a tiny amount, based on pixel counting from the leaked pictures. A small IPC increase looks very plausible.