Page 23 of 33 (Results 551 to 575 of 812)

Thread: AMD To Launch Radeon HD4890 "RV790" In April

  1. #551
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by ownage View Post
    If it doesn't have a separate shader-clock domain, the shaders won't scale.
    Maybe that's the reason why 13% higher clocks mean a 6% performance boost.


    HD4890 is listed in Holland
    http://tweakers.net/pricewatch/23669...dr5-pci-e.html
    Huh? It will still scale linearly, since the core speed increase raises the ROPs, TMUs and shaders together.
    That 6% increase is supposedly clock vs clock.

    Quote Originally Posted by Glow9 View Post
    Curious, maybe someone can explain: why does the ATI card have only a 256-bit memory interface? It has a bit more RAM than the GTX, yet the GTX has a 448-bit memory interface.
    GDDR5. They can have a 256-bit interface and still have bandwidth similar to the wider GDDR3 buses on the G200.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  2. #552
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    My wild guess here is that AMD knew there was a fair, but not overwhelming, amount of improvement left in the RV770 core. So they poured the majority of their engineering resources into RV870, which is now rumoured to be releasing earlier than expected. Still, I bet the HD4890 will be a worthy successor to the HD4870 in a couple of ways. There's a good chance I'm out in left field here, too.

  3. #553
    Registered User
    Join Date
    Nov 2008
    Posts
    72
    Quote Originally Posted by flippin_waffles View Post
    My wild guess here is that AMD knew there was a fair, but not overwhelming, amount of improvement left in the RV770 core. So they poured the majority of their engineering resources into RV870, which is now rumoured to be releasing earlier than expected. Still, I bet the HD4890 will be a worthy successor to the HD4870 in a couple of ways. There's a good chance I'm out in left field here, too.
    Maybe GT300 and RV870 will come out around the Windows 7 final release.
    I could be wrong, but that's what I noticed when Vista came out and new cards launched alongside it; I think they're connected somehow.
    Last edited by Salsoolo; 03-21-2009 at 10:52 AM.

  4. #554
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by flippin_waffles View Post
    My wild guess here is that AMD knew there was a fair, but not overwhelming, amount of improvement left in the RV770 core. So they poured the majority of their engineering resources into RV870, which is now rumoured to be releasing earlier than expected. Still, I bet the HD4890 will be a worthy successor to the HD4870 in a couple of ways. There's a good chance I'm out in left field here, too.
    More like AMD realized how closely RV770 was already competing with G200 and, with the delays on 40nm, decided an RV790 would be a good filler until they could get RV870 out.

    40nm was supposed to be ready in Q4 '08 but was delayed a quarter, and RV870 was a late-Q2 target. That was then pushed back to Q3, with only a rumor of another delay pushing it into Q4, even though most rumors say RV870 is still on track for a Q3 launch.
    Last edited by LordEC911; 03-21-2009 at 10:59 AM.

  5. #555
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Cairo
    Posts
    2,366
    Where do you guys read that it's 6% faster clock for clock?
    "Internal benchmarks claim 6% faster than RV770":
    that means RV790 is 6% faster than RV770, 850MHz vs 750MHz. It could even be 6% from the 4870 to the 4890.
    Intel Core I7 920 @ 3.8GHZ 1.28V (Core Contact Freezer)
    Asus X58 P6T
    6GB OCZ Gold DDR3-1600MHZ 8-8-8-24
    XFX HD5870
    WD 1TB Black HD
    Corsair 850TX
    Cooler Master HAF 922

  6. #556
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by kemo View Post
    Where do you guys read that it's 6% faster clock for clock?

    That means RV790 is 6% faster than RV770, 850MHz vs 750MHz. It could even be 6% from the 4870 to the 4890.
    Ummm... because 6% total doesn't make sense...
    The only way it makes sense is if it is a 6% increase clock for clock.
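For what it's worth, the arithmetic behind the two readings being debated is easy to sketch. This is just a back-of-the-envelope calculation assuming performance scales perfectly with clock, which real games rarely do:

```python
# Back-of-the-envelope for the RV770 -> RV790 numbers in this thread.
core_old, core_new = 750, 850         # MHz, HD4870 vs rumoured HD4890
clock_gain = core_new / core_old - 1  # the "13%" figure (actually ~13.3%)

# Reading 1: the 6% is the TOTAL gain, i.e. the shaders are not scaling.
# Reading 2: the 6% is clock-for-clock (IPC); combined with the clock bump:
ipc_gain = 0.06
total_gain = (1 + clock_gain) * (1 + ipc_gain) - 1

print(round(clock_gain * 100, 1))  # 13.3
print(round(total_gain * 100, 1))  # 20.1
```

Under reading 2, the rumoured card would end up roughly 20% faster overall, which is why a flat "6% faster" headline reads so differently depending on interpretation.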

  7. #557
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Quote Originally Posted by Glow9 View Post
    Curious, maybe someone can explain: why does the ATI card have only a 256-bit memory interface? It has a bit more RAM than the GTX, yet the GTX has a 448-bit memory interface.
    To be precise:
    The HD4870 has eight GDDR5 chips, each 1Gbit (128MB) in capacity, yielding a total of 8×128MB = 1GB; since each chip has a 32-bit bus, the combined bus is 8×32bit = 256bit.
    The GTX260 has 14 GDDR3 chips, each 512Mbit (64MB), totalling 14×64MB = 896MB, with a total bus of 14×32bit = 448bit. The GTX280 has 16 chips, so substitute 14 → 16 in those calculations.
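largon's chip arithmetic generalises: capacity and bus width both scale with the chip count, since each GDDR chip contributes a fixed 32-bit slice of the bus. A small sketch of that calculation, with chip counts and densities taken from the post above:

```python
def board_config(chips, chip_mbits, chip_bus_bits=32):
    """Total memory (MB) and bus width (bits) from per-chip specs."""
    capacity_mb = chips * chip_mbits // 8  # 8 bits per byte
    bus_bits = chips * chip_bus_bits
    return capacity_mb, bus_bits

print(board_config(8, 1024))   # HD4870: 8 x 1Gbit GDDR5   -> (1024, 256)
print(board_config(14, 512))   # GTX260: 14 x 512Mbit GDDR3 -> (896, 448)
print(board_config(16, 512))   # GTX280: 16 chips           -> (1024, 512)
```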

    Quote Originally Posted by kemo View Post
    Where do you guys read that it's 6% faster clock for clock?
    That means RV790 is 6% faster than RV770, 850MHz vs 750MHz. It could even be 6% from the 4870 to the 4890.
    It's not wise to believe everything you read. Fact is, the 13% frequency increase alone will gain 13% more performance.

    And come on, would it make sense to produce different silicon for a mere 6% total performance increase? They could have cherry-picked age-old RV770s to get more than 6%, cheaper and with less hassle.
    You were not supposed to see this.

  8. #558
    Xtreme Addict
    Join Date
    Feb 2007
    Location
    Arizona, USA
    Posts
    1,700
    Quote Originally Posted by largon View Post
    It's not wise to believe everything you read. Fact is, the 13% frequency increase alone will gain 13% more performance.

    And come on, would it make sense to produce different silicon for a mere 6% total performance increase? They could have cherry-picked age-old RV770s to get more than 6%, cheaper and with less hassle.
    True, it does seem like a bit of a hassle, but it is a "new" chip after all.


    Also, do we have confirmation yet of it being on the 55GT process?


    Core i7 920 D0 B-batch (4.1) (Kinda Stable?) | DFI X58 T3eH8 (Fed up with its' issues, may get a new board soon) | Patriot 1600 (9-9-9-24) (for now) | XFX HD 4890 (971/1065) (for now) |
    80GB X25-m G2 | WD 640GB | PCP&C 750 | Dell 2408 LCD | NEC 1970GX LCD | Win7 Pro | CoolerMaster ATCS 840 {Modded to reverse-ATX, WC'ing internal}

    CPU Loop: MCP655 > HK 3.0 LT > ST 320 (3x Scythe G's) > ST Res >Pump
    GPU Loop: MCP655 > MCW-60 > PA160 (1x YL D12SH) > ST Res > BIP 220 (2x YL D12SH) >Pump

  9. #559
    Registered User
    Join Date
    May 2005
    Location
    Australia
    Posts
    93
    Quote Originally Posted by Glow9 View Post
    Curious, maybe someone can explain: why does the ATI card have only a 256-bit memory interface? It has a bit more RAM than the GTX, yet the GTX has a 448-bit memory interface.
    Quote Originally Posted by largon View Post
    To be precise:
    The HD4870 has eight GDDR5 chips, each 1Gbit (128MB) in capacity, yielding a total of 8×128MB = 1GB; since each chip has a 32-bit bus, the combined bus is 8×32bit = 256bit.
    The GTX260 has 14 GDDR3 chips, each 512Mbit (64MB), totalling 14×64MB = 896MB, with a total bus of 14×32bit = 448bit. The GTX280 has 16 chips, so substitute 14 → 16 in those calculations.

    -Snip-
    To add to what largon has said: the 4870/4890 use GDDR5, whereas the GTXes use GDDR3.

    GDDR5 is quad-pumped, so at 900MHz it transfers four bits per pin per clock cycle = 3600MHz effective. 3600MHz × 256 bit = 921600Mbps ≈ 900Gbps.

    The GDDR3 RAM only transfers two bits per pin per clock cycle, so 999MHz × 2 = 1998MHz effective, and 1998MHz × 448 bit = 895104Mbps ≈ 874Gbps.

    So you see, the total memory bandwidth is much more similar between the cards; they just use different methods to get there.

    Just for the hell of it, I will also mention that from a card-manufacturing point of view the lower bus width (e.g. the 4870's) is easier to implement: there are half as many wires/traces to cram onto the circuit board with a 256-bit bus vs. a 512-bit bus. This makes the card cheaper to produce and is possibly part of the reason the Radeons are cheaper. It is also the technique that has successfully been used to make many a mid-range card: create the high-end card, let the manufacturing processes mature and cheapen, chop the memory bus in half, use a cheap but mature memory technology running at full steam, and you have a decent-performing mid-range part. Anyway, enough lecturing; someone will correct me soon.

    -Muunsyr
    kuroikenshi: "I cannot believe that their flagship product will be New Zealand (they should make an all blacks version of it)"
    SexyMF: "I am stoked that I can get a decent GPU branded with my country "
    MrMojoZ: "Now all you need is electricty. "

  10. #560
    Xtreme Addict
    Join Date
    Aug 2004
    Location
    Austin, TX
    Posts
    1,346
    Beyond3D thinks that ATI upped the die size by a tiny amount, based on pixel counting from the leaked pictures. A small IPC increase looks very plausible.
    oh man

  11. #561
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Muunsyr View Post
    To add to what largon has said: the 4870/4890 use GDDR5, whereas the GTXes use GDDR3.

    GDDR5 is quad-pumped, so at 900MHz it transfers four bits per pin per clock cycle = 3600MHz effective. 3600MHz × 256 bit = 921600Mbps ≈ 900Gbps.

    The GDDR3 RAM only transfers two bits per pin per clock cycle, so 999MHz × 2 = 1998MHz effective, and 1998MHz × 448 bit = 895104Mbps ≈ 874Gbps.

    So you see, the total memory bandwidth is much more similar between the cards; they just use different methods to get there.

    Just for the hell of it, I will also mention that from a card-manufacturing point of view the lower bus width (e.g. the 4870's) is easier to implement: there are half as many wires/traces to cram onto the circuit board with a 256-bit bus vs. a 512-bit bus. This makes the card cheaper to produce and is possibly part of the reason the Radeons are cheaper. It is also the technique that has successfully been used to make many a mid-range card: create the high-end card, let the manufacturing processes mature and cheapen, chop the memory bus in half, use a cheap but mature memory technology running at full steam, and you have a decent-performing mid-range part. Anyway, enough lecturing; someone will correct me soon.

    -Muunsyr
    Yep, you forgot to divide by 8.
    4870 has 115.2Gbps of bandwidth and GTX260 has 111.9Gbps.
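To make the unit confusion in this exchange explicit, here is the same arithmetic done once in decimal GB/s (effective transfer rate × bus width ÷ 8 bits per byte); it reproduces the 115.2 and 111.9 figures above:

```python
def bandwidth_gb_s(clock_mhz, transfers_per_clock, bus_bits):
    """Peak memory bandwidth in decimal GB/s (gigaBYTES per second)."""
    effective_mt_s = clock_mhz * transfers_per_clock  # effective transfer rate
    return effective_mt_s * bus_bits / 8 / 1000       # Mbit/s -> GB/s

print(bandwidth_gb_s(900, 4, 256))  # HD4870, GDDR5 quad-pumped: 115.2
print(bandwidth_gb_s(999, 2, 448))  # GTX260, GDDR3 double-pumped: 111.888
```

The earlier "900Gbps" figure is the same quantity left in gigaBITS per second (and divided by 1024 rather than 1000), which is where the back-and-forth about dividing by 8 comes from.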

  12. #562
    Registered User
    Join Date
    May 2005
    Location
    Australia
    Posts
    93
    Quote Originally Posted by LordEC911 View Post
    Yep, you forgot to divide by 8.
    4870 has 115.2Gbps of bandwidth and GTX260 has 111.9Gbps.
    Not quite; dividing by 8 gives us GBps (gigabytes per second) as opposed to Gbps (gigabits per second).

    (It must be late over there, because I also make that out to be 112.5GBps for the Radeon, not 115.2...)

  13. #563
    Registered User
    Join Date
    Nov 2008
    Posts
    54
    I hope the 4890 will be able to put pressure on the GTX285 and 275.

  14. #564
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Muunsyr View Post
    Not quite; dividing by 8 gives us GBps (gigabytes per second) as opposed to Gbps (gigabits per second).

    (It must be late over there, because I also make that out to be 112.5GBps for the Radeon, not 115.2...)
    Well, I have never seen anyone post those numbers you did...

  15. #565
    Xtreme Addict
    Join Date
    Aug 2006
    Location
    The Netherlands, Friesland
    Posts
    2,244
    Last edited by ownage; 03-22-2009 at 05:55 AM.
    >i5-3570K
    >Asrock Z77E-ITX Wifi
    >Asus GTX 670 Mini
    >Cooltek Coolcube Black
    >CM Silent Pro M700
    >Crucial M4 128Gb Msata
    >Cooler Master Seidon 120M
    Hell yes its a mini-ITX gaming rig!

  16. #566
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Higher clocks and lower FPS? Seems like a possible driver problem to me. Then again, there should be no major changes needed on the driver side if only the clocks are changed; it's a completely different story if the whole shader structure/count is changed.

    Something is wrong there.

  17. #567
    Xtreme Mentor
    Join Date
    Nov 2005
    Location
    Devon
    Posts
    3,437
    Promising gains in FC2
    RiG1: Ryzen 7 1700 @4.0GHz 1.39V, Asus X370 Prime, G.Skill RipJaws 2x8GB 3200MHz CL14 Samsung B-die, TuL Vega 56 Stock, Samsung SS805 100GB SLC SDD (OS Drive) + 512GB Evo 850 SSD (2nd OS Drive) + 3TB Seagate + 1TB Seagate, BeQuiet PowerZone 1000W

    RiG2: HTPC AMD A10-7850K APU, 2x8GB Kingstone HyperX 2400C12, AsRock FM2A88M Extreme4+, 128GB SSD + 640GB Samsung 7200, LG Blu-ray Recorder, Thermaltake BACH, Hiper 4M880 880W PSU

    SmartPhone Samsung Galaxy S7 EDGE
    XBONE paired with 55'' Samsung LED 3D TV

  18. #568
    Registered User
    Join Date
    May 2005
    Location
    Australia
    Posts
    93
    Something does smell off.

    Given that one of these tests is clearly not indicative of the performance difference, that leaves us with one test's results, which is hardly enough to start judging the card on, especially since different drivers were used for each card.

    Edit: Also, is that 850/2.200 and 750/1.800 the core/memory frequency of each card respectively? If so, it would suggest the 4890 card tested has 1100MHz (4400MHz effective) memory. That, or the reviewer can't type properly.
    Last edited by Muunsyr; 03-22-2009 at 06:21 AM.
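A quick sketch of the reasoning in Muunsyr's edit, assuming the leaked "2.200" figure is the GDDR5 I/O clock (which runs at twice the command clock, with four data bits transferred per pin per command clock):

```python
io_clock = 2200                # MHz, as listed for the 4890 sample ("850/2.200")
command_clock = io_clock / 2   # GDDR5 command clock is half the I/O clock
effective = command_clock * 4  # four data transfers per command clock

print(command_clock)  # 1100.0
print(effective)      # 4400.0
```

That reading yields exactly the 1100MHz (4400MHz effective) memory speculated above; if "2.200" is instead the effective rate, the numbers would be halved.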

  19. #569
    Live Long And Overclock
    Join Date
    Sep 2004
    Posts
    14,058
    PCGH's early numbers are never spot-on, but you can see a nice delta developing at the higher resolutions.

    Plus, I want to see this thing clocked.

    Perkam

  20. #570
    Xtreme Addict
    Join Date
    Dec 2008
    Location
    Sweden, Linköping
    Posts
    2,034
    Quote Originally Posted by Calmatory View Post
    Higher clocks and lower FPS? Seems like a possible driver problem to me. Then again, there should be no major changes needed on the driver side if only the clocks are changed; it's a completely different story if the whole shader structure/count is changed.

    Something is wrong there.
    There has to be, or else I will be the first one saying "ATI did an Nvidia..."
    SweClockers.com

    CPU: Phenom II X4 955BE
    Clock: 4200MHz 1.4375v
    Memory: Dominator GT 2x2GB 1600MHz 6-6-6-20 1.65v
    Motherboard: ASUS Crosshair IV Formula
    GPU: HD 5770

  21. #571
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Shouldn't they have tested with the same beta driver for both cards? And what does "x64/x86 Catalyst driver" mean? Did they test one card with the 64-bit driver and the other with the 32-bit one?
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  22. #572
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Those results for the 4890 do not come from PCGH, and they say so in their thread. Take a look at the Crysis results; there you will find a watermark from someone else.

  23. #573
    Banned
    Join Date
    Jan 2009
    Posts
    1,510
    I have high hopes for the 4890 to compete with the 285 and the 4890X2 to compete with the 295.

  24. #574
    Live Long And Overclock
    Join Date
    Sep 2004
    Posts
    14,058
    Quote Originally Posted by ILikeCosmosS View Post
    I have high hopes for the 4890 to compete with the 285
    Unlikely.

    and the 4890X2 to compete with the 295
    Very likely.

    Perkam

  25. #575
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    lol, the 4890 gets standard FPS in Crysis, so there's a problem there, but the Far Cry test is incredible: nearly 25 percent.

