Page 9 of 42
Results 201 to 225 of 1028

Thread: AMD Radeon HD 6870 and HD 6850 confirmed to be launched on 22.10.2010

  1. #201
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Baron_Davis View Post
    To those saying how informed buyers will KNOW the 6870 is slower than their 5870, you are being very short sighted. This isn't about the <1% (ppl on XS) that build their own systems, this is about the mainstream. BestBuy and jabroni stores like it will have desktops with 6870's, and laptops will start carrying 6870M and random Joe's will buy it, see the 6870, see another product with a 5850 or 5870 and immediately think "mine is faster". If you don't comprehend this type of thinking, you must live alone in a cave. ANY NORMAL, NON-TECHNICAL PERSON ASSUMES A BIGGER NUMBER MEANS FASTER.

    Dude, normal people don't buy video cards every 17 months... your argument of "protecting the consumer" is weak.


    Example:
    This Xmas, people are going to want to buy a $150 Graphics Card... they don't care one sh!z about the name of the card, just how good it is against all other cards in their price range.

    None of them are going to look at the pedigree/genealogy of how the card got its name... the average Joe doesn't care. Only concern is "what plays Battlefield the fastest, for the least amount" .. is all they care about.
    (ie: price/performance)

    Matter of fact, that logical ratio^ is all anyone cares about... unless you are a disgruntled nv fanboi.

  2. #202
    Xtreme Member
    Join Date
    Oct 2005
    Posts
    354
    I just noticed an interesting post, I think this is from a reviewer.

    Grain of salt though.

    "MSRP of $179 and $229 respectively, matching Nvidia's GTX 460 pricing"


    Cheapest on NE now is:
    GTX 460 768MB ~$140.00
    GTX 460 1GB ~$200.00

    Both after MIR
    i7 920 @ 4.0GHz
    Scythe MUGEN-2 with Push/Pull
    Gigabyte EX58 UD5
    3X2GB G-Skill DDR3
    Sapphire 5870 1GB Vapor-X
    OCZ Agility 120GB
    24" Acer HDMI LCD
    Corsair TX850
    Lian-Li PC-V1000

  3. #203
    Xtreme Enthusiast
    Join Date
    Jan 2010
    Posts
    533
    Quote Originally Posted by BeepBeep2 View Post
    No, it's 775/???? with 800 SP...

    6870 will have 960 @ 850/????
    How sure are you?

    My source tells me that the HD 6850 has 960 SPs and the HD 6870 has 1120 SPs.


    Btw, could all of you drop the freaking rename discussion... it's seriously getting out of hand...

  4. #204
    Xtreme Addict
    Join Date
    May 2009
    Location
    Switzerland
    Posts
    1,972
    Well, we can only compare official prices, because of course there's a lot of difference between countries, shops, special offers, stock liquidations, etc.
    CPU: - I7 4930K (EK Supremacy )
    GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
    Motherboard: Asus x79 Deluxe
    RAM: G-skill Ares C9 2133mhz 16GB
    Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0

  5. #205
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    I love how people are throwing conclusive numbers around from leaked slides from way back, possibly faked, with performance numbers that have no reference.

    Serious advice... don't waste your time; all the real info is due after this weekend.

  6. #206
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by BeepBeep2 View Post
    Because his game runs slower.

    Oh, and cool ego dude.



    HD5750's are now to be called HD6750, and HD5770's will be renamed to HD6770. It just adds on to the pile of misleading trash...


    Okay, I give up.

    I guess I wasn't satisfied with the high core clock combined with almost no increase in performance at the same price point. The fact that it saves a considerable amount of power over the 5-Series (and apparently GTX4xx) is great...

    Old 6770 slide:


    New Barts XT slide:


    Note-

    Barts XT: Core Clock has gone down 50 Mhz to 850 Mhz from 900, SP's were reduced from 1280 to 960, Texture Units reduced from 64 to 48, and board power has gone up marginally, from 146w to greater than 150w. Remember, the 5850's max board power is 151w.

    Barts PRO: Core Clock is now listed at 700-725 Mhz (was 725), SP's were reduced from 1120 to 800, Texture Units dropped from 56 to 40, and board power went from 114w to less than 150w.

    You guys have to realize that Barts XT has changed a bit recently.
    The HD6770 listed in leaked slides has the card at 900 Mhz core, with 1280 Stream Processors, 64 texture units.

    1 multiply+add "mad" (2 FP) ops per cycle * number of stream processors (1280) * core clock (900 Mhz) equals the 2.304 TFlops as that slide states.

    This was the 6770, and its specs are still worse than the "current" 5870. It's closer to the 5850 in performance. (Which is acceptable at a $250 price point.) Pay very close attention to the max board power for Barts XT on this slide: 146w.

    The black slide I have above shows Barts XT as 850 Mhz core, with 960 Stream Processors, 48 texture units @ greater than 150w. Again, the 5850's max board power is 151w.
    Those specs leave us with 1 multiply+add (2 FP) ops per cycle * number of stream processors (960) (which is cut down by 320 from 1280) * core clock (850 Mhz)
    = 1.632 TFLops of Single Precision.

    At a $260 price point, that leaves us with something between a 5770 and 5850. (Remember, 5850's can be had for $260 at this time.)

    Performance wise (Single Precision only, and ONLY Single Precision) it is equal to a 5770 at 1020 Mhz.
    HD5770 (Can be had for $135, or $125 w/ rebate on Newegg.com on Oct. 15 2010):
    10 SIMD's
    2 FP ops * 800 SP's * 850 Mhz Core Clock = 1.36 TFLops/s SP
    40 Texture Units * 850 Mhz Core Clock = 34 GTexel/s
    16 ROP's * 850 Mhz Core Clock = 13.6 GPixel/s
    64 Z/Stencils * 850 Mhz Core Clock = 54.4 GSamples/s
    1200 Mhz GDDR5 * 128 bit bus / 2 times = 76.8 GB/s
    Max board power: 108w

    Overclocked HD5770 @ 1020/1445 ($140 to $160 for voltage control):
    10 SIMD's
    2 FP ops * 800 SP's * 1020 Mhz Core Clock = 1.632 TFLops/s SP
    40 Texture Units * 1020 Mhz Core Clock = 40.8 GTexel/s
    16 ROP's * 1020 Mhz Core Clock = 16.32 GPixel/s
    64 Z/Stencils * 1020 Mhz Core Clock = 65.28 GSamples/s
    1200 Mhz GDDR5 * 128 bit bus / 2 times = 76.8 GB/s
    Or for fun: 1445 Mhz GDDR5 * 128 bit bus / 2 times = 92.5 GB/s

    HD6850 (Barts PRO) from black slide priced at $200 (?):
    10 SIMD's
    2 FP Ops * 800 SP's * 700-725 Mhz Core Clock = 1.12 TFLops SP to 1.16 TFLops SP
    40 Texture Units * 700-725 Mhz Core Clock = 28 GTexel/s to 29 GTexel/s
    32 ROP's * 700-725 Mhz Core Clock = 22.4 GPixel/s to 23.2 GPixel/s
    128 Z/Stencils * 700-725 Mhz Core Clock = 89.6 GSamples/s to 92.8 GSamples/s
    1000 Mhz GDDR5 * 256 bit bus / 2 times = 128 GB/s
    Max board power: less than 150w

    New HD6850 (Barts PRO) priced around $200 to $225:
    10 SIMD's
    2 FP Ops * 800 SP's * 775 Mhz Core Clock = 1.24 TFLops SP
    40 Texture Units * 775 Mhz Core Clock = 31 GTexel/s
    32 ROP's * 775 Mhz Core Clock = 24.8 GPixel/s
    128 Z/Stencils * 775 Mhz = 99.2 GSamples/s
    1000 Mhz GDDR5 (?) * 256 bit bus / 2 times = 128 GB/s
    1200 Mhz GDDR5 (?) * 256 bit bus / 2 times = 153.6 GB/s (I doubt we will see this on a midrange card)

    HD5850 (currently selling for $260 @ Newegg.com on Oct. 15 2010):
    18 SIMD's
    2 FP Ops * 1440 SP's * 725 Mhz Core Clock = 2.088 TFLops SP
    72 Texture Units * 725 Mhz Core Clock = 52.2 GTexel/s
    32 ROP's * 725 Mhz Core Clock = 23.2 GPixel/s
    128 Z/Stencils * 725 Mhz Core Clock = 92.8 GSamples/s
    1000 Mhz GDDR5 * 256 bit bus / 2 times = 128 GB/s.
    Maximum Board Power = 151w

    HD5850 Overclocked to 1000/1300
    18 SIMD's
    2 FP ops * 1440 SP's * 1000 Core Clock = 2.88 TFLops SP (note, this is nearly twice as fast as the stock 6870 below)
    72 Texture Units * 1000 Core Clock = 72 GTexel/s
    32 ROP's * 1000 Core Clock = 32 GPixel/s
    128 Z/Stencils * 1000 Core Clock = 128 GSamples/s
    1300 Mhz GDDR5 * 256 bit bus / 2 = 166.4 GB/s

    HD6870 (Barts XT) from the black slide ($260?):
    12 SIMD's
    2 FP ops * 960 SP's * 850 Mhz Core Clock = 1.632 TFLops SP
    48 Texture Units * 850 Mhz Core Clock = 40.8 GTexel/s
    32 ROP's * 850 Mhz Core Clock = 27.2 GPixel/s
    128 Z/Stencils (as noted on black chart) * 850 Mhz Core Clock = 108.8 GSamples/s
    1050 Mhz GDDR5 * 256 bit bus / 2 = 134.4 GB/s
    Max board power: greater than 150w

    Theoretical HD6870 with completely insane overclock @ 1100/1450:
    12 SIMD's
    2 FP ops * 960 SP's * 1100 Mhz Core Clock = 2.112 TFLops SP
    48 Texture Units * 1100 Mhz Core Clock = 52.8 GTexel/s
    32 ROP's * 1100 Mhz Core Clock = 35.2 GPixel/s
    128 Z/Stencils * 1100 Mhz Core Clock = 140.8 GSamples/s
    1450 Mhz GDDR5 * 256 bit bus / 2 = 185.6 GB/s

    HD5870 (currently selling for $350, as low as $310 with rebate on Newegg.com as of Oct. 15 2010):
    20 SIMD's
    2 FP ops * 1600 SP's * 850 Mhz Core Clock = 2.72 TFLops SP
    80 Texture Units * 850 Mhz Core Clock = 68.0 GTexel/s
    32 ROP's * 850 Mhz Core Clock = 27.2 GPixel/s
    128 Z/Stencils * 850 Mhz Core Clock = 108.8 GSamples/s
    1200 Mhz GDDR5 * 256 bit bus / 2 = 153.6 GB/s
    Max board power: 188w

    Overclocked 5870 @ 1000/1325 :
    20 SIMD's
    2 FP ops * 1600 SP's * 1000 Mhz Core Clock = 3.20 TFLops SP
    80 Texture Units * 1000 Mhz Core Clock = 80.0 GTexel/s
    32 ROP's * 1000 Mhz Core Clock = 32.0 GPixel/s
    128 Z/Stencils * 1000 Mhz Core Clock = 128.0 GSamples/s
    1325 Mhz GDDR5 * 256 bit bus / 2 = 169.6 GB/s
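All the arithmetic above follows the same few formulas, so here is a small Python sanity check of them. The unit counts and clocks below are the rumored slide figures quoted in this post, not confirmed specs:

```python
# Theoretical throughput from unit counts and clocks, matching the
# formulas used above. All card specs are rumored/leaked figures.

def throughput(sps, tmus, rops, zstencils, core_mhz, mem_mhz, bus_bits):
    return {
        # one multiply-add ("MAD") = 2 FP ops per SP per cycle
        "tflops_sp": 2 * sps * core_mhz / 1e6,
        "gtexel_s": tmus * core_mhz / 1e3,
        "gpixel_s": rops * core_mhz / 1e3,
        "gsample_s": zstencils * core_mhz / 1e3,
        # GDDR5 moves 4 bits per pin per clock: MHz * bits * 4 / 8 bits-per-byte
        # collapses to the "bus / 2" shorthand above, giving MB/s
        "gb_s": mem_mhz * bus_bits / 2 / 1e3,
    }

barts_xt = throughput(960, 48, 32, 128, 850, 1050, 256)   # rumored HD 6870
hd5850   = throughput(1440, 72, 32, 128, 725, 1000, 256)

print(barts_xt["tflops_sp"], hd5850["tflops_sp"])  # 1.632 2.088
print(barts_xt["gb_s"], hd5850["gb_s"])            # 134.4 128.0
```

The "bus / 2" memory shorthand works because GDDR5 is quad-pumped: MHz × bus-width × 4 ÷ 8 bits-per-byte is the same as MHz × bus-width ÷ 2, in MB/s.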

    Also keep in mind that new Barts 6800 Series cards only feature one crossfire interconnect.


    How does any of that gibberish bolster your argument that the GTX 460 is a better buy than the 6000 series?
    Last edited by Xoulz; 10-15-2010 at 10:29 AM.

  7. #207
    Xtreme Enthusiast
    Join Date
    Jan 2010
    Posts
    533
    You had to quote the whole thing?

  8. #208
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Xoulz View Post


    How does any of that gibberish bolster your argument that the GTX 460 is a better buy than the 6000 series?
    cause his argument was that 6800 is worse than 5800 in many ways
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  9. #209
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Vardant View Post
    You had to quote the whole thing?
    Makes my subtle point all the more meaningful.

  10. #210
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Manicdan View Post
    cause his argument was that 6800 is worse than 5800 in many ways
    It's an illogical argument.

    The 6800 series is not priced the same as the 5800 series.

  11. #211
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Xoulz View Post
    It's an illogical argument.

    The 6800 series is not priced the same as the 5800 series.
    his argument is that a 6870 will have a worse price to perf, and power to perf, ratio than a 5850. which is a very valid argument to have if true. i dont care if its called a "sh*ty870" no new generation should be worse than the old in every way. time will tell how true his numbers are though
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  12. #212
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by Vardant View Post
    You had to quote the whole thing?
    Seriously, I lol'ed when he quoted the whole thing and called it gibberish. Not my fault he can't process that many numbers.

    Quote Originally Posted by Manicdan View Post
    cause his argument was that 6800 is worse than 5800 in many ways
    Well, it was 5850 > Barts XT in many ways, and on paper, CPC, Barts gets the crap beat out of it by Cypress.

    Quote Originally Posted by Xoulz View Post
    It's an illogical argument.

    The 6800 series is not priced the same as the 5800 series.
    How is

    5850 > Barts XT
    Illogical in any way?

    The 5850 and 6870 will be in the same price segment. The fact that 5800 series prices will go down as well just makes my argument more valid.


    Quote Originally Posted by Manicdan View Post
    his argument is that a 6870 will have a worse price to perf, and power to perf, ratio than a 5850. which is a very valid argument to have if true. i dont care if its called a "sh*ty870" no new generation should be worse than the old in every way. time will tell how true his numbers are though
    Even if it performs the same, it won't matter, because the 5850 will be cheaper.

    I've had at least five people flame and troll me about this, telling me I'm ignorant and making me look stupid. How stupid do I look now?

    Barts XT Vs 5850:

    HD6870 (Barts XT) from the black slide ($260?):
    12 SIMD's
    2 FP ops * 960 SP's * 850 Mhz Core Clock = 1.632 TFLops SP
    48 Texture Units * 850 Mhz Core Clock = 40.8 GTexel/s
    32 ROP's * 850 Mhz Core Clock = 27.2 GPixel/s
    128 Z/Stencils (as noted on black chart) * 850 Mhz Core Clock = 108.8 GSamples/s
    1050 Mhz GDDR5 * 256 bit bus / 2 = 134.4 GB/s
    Max board power: (greater than) >150w

    vs


    HD5850 (currently selling for $260 @ Newegg.com on Oct. 15 2010):
    18 SIMD's
    2 FP Ops * 1440 SP's * 725 Mhz Core Clock = 2.088 TFLops SP
    72 Texture Units * 725 Mhz Core Clock = 52.2 GTexel/s
    32 ROP's * 725 Mhz Core Clock = 23.2 GPixel/s
    128 Z/Stencils * 725 Mhz Core Clock = 92.8 GSamples/s
    1000 Mhz GDDR5 * 256 bit bus / 2 times = 128 GB/s.
    Maximum Board Power = 151w
    Last edited by BeepBeep2; 10-15-2010 at 10:49 AM.

  13. #213
    Xtreme Addict
    Join Date
    May 2009
    Location
    Switzerland
    Posts
    1,972
    As we have no idea of the actual prices, performance, or watts... maybe it's a bit early, no? Or should we do like Fudzilla and write an article saying HD6K is AMD's Fermi?
    CPU: - I7 4930K (EK Supremacy )
    GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
    Motherboard: Asus x79 Deluxe
    RAM: G-skill Ares C9 2133mhz 16GB
    Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0

  14. #214
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Xoulz View Post
    Dude, normal people don't buy video cards every 17 months... your argument of "protecting the consumer" is weak.


    Example:
    This Xmas, people are going to want to buy a $150 Graphics Card... they don't care one sh!z about the name of the card, just how good it is against all other cards in their price range.

    None of them are going to look at the pedigree/genealogy of how the card got it's name... the average Joe doesn't care. Only concern is "what plays Battlefield the fastest, for the least amount" .. is all they care about.
    (ie: price/performance)

    Matter of fact, that logical ratio^ is all anyone cares about... unless you are a disgruntled nv fanboi.
    If people don't care about naming, why did people go crazy over the Nvidia naming fiasco? The pricing of the cards lowered with each renaming, and they were priced against what they were competing with. The same rules should apply to this product too, regardless of the change in architecture.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  15. #215
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    the 3870 was slower than a 2900xt last time i checked. but i guess thats ok cause it had better prices and power consumption, and cause it went from x9xx to x8xx

    so what if they called these the 6860 and 6840? would people stop crying?
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  16. #216
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    464
    Quote Originally Posted by tajoh111 View Post
    If people don't care about naming, why did people go crazy over the Nvidia naming fiasco? The pricing of the cards lowered with each renaming and it was priced to what it was competing against. The same rules should apply to that product too regardless of the change in architecture.
    Barts is a new GPU, is it not? How can you rename something before it's named the first time?

  17. #217
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Eagleclaw View Post
    But the 6870 will be what? $50-$75 cheaper than the 5870?

    So ATI just saved dumbass Joe that much money.

    But the bigger point being you can't mother everyone like the government
    and blame everyone but yourself if you are not smart enough to google a review.

    If Average Joe is not smart enough to google reviews are you going to baby sit him when he
    goes out and gets that new washer and dryer at Sears and the sales person talks him into the worst model?

    Sorry but no amount of labeling is going to stop stupid people from doing stupid things.

    Warning this coffee is HOT!


    edit:

    On the serious side. I really do see your point about higher numbers, but come on, at least wait for the
    full lineup to be released before complaining about it.

    Sure until the 5870's are off the shelves this is going to cause some problems but oh well.


    edit 2:

    On this comment "Joe's will buy it, see the 6870, see another product with a 5850 or 5870 and immediately think "mine is faster"."

    How will he ever know it's not faster? He would have to read a review, oh wait he doesn't read reviews. See that is why I don't think
    you are making much of a point with those comments.
    The problem is not when they bring 5870-level performance down to 259 dollars, it's when they bring 5850-level performance again at 259 dollars. I have a hard time believing that a 960 shader card will compete with a 1600 shader card that is 50% bigger, at similar clocks. In addition, the leaked Vantage numbers seem very believable for a 960 shader part.

    Quote Originally Posted by bill_d View Post
    barts is a new gpu is it not, how can you rename something b4 it's named the first time
    I guess it is not so much renaming as it is deceptive naming. Both NV's renaming and AMD's name shifting are doing the same thing, that is, making the consumer believe they are getting something faster than they really are. The difference is, the AMD part is cheaper to produce (compared to the 5850) and it sounds like they are charging the same price of admission.
    Last edited by tajoh111; 10-15-2010 at 11:09 AM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  18. #218
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by BeepBeep2 View Post
    Seriously, I lol'ed when he quoted the whole thing and called it gibberish. Not my fault he can't process that many numbers.



    Well, it was 5850 > Barts XT in many ways, and on paper, CPC, Barts gets the crap beat out of it by Cypress.



    How is



    Illogical in any way?

    The 5850 and 6870 will be in the same price segment. The fact that 5800 series prices will go down as well just makes my argument more valid.




    Even if it performs the same, it won't matter, because the 5850 will be cheaper.

    I've had at least five people flame and troll me about this, telling me I'm ignorant and making me look stupid. How stupid do I look now?

    Barts XT Vs 5850:

    HD6870 (Barts XT) from the black slide ($260?):
    12 SIMD's
    2 FP ops * 960 SP's * 850 Mhz Core Clock = 1.632 TFLops SP
    48 Texture Units * 850 Mhz Core Clock = 40.8 GTexel/s
    32 ROP's * 850 Mhz Core Clock = 27.2 GPixel/s
    128 Z/Stencils (as noted on black chart) * 850 Mhz Core Clock = 108.8 GSamples/s
    1050 Mhz GDDR5 * 256 bit bus / 2 = 134.4 GB/s
    Max board power: (greater than) >150w

    vs


    HD5850 (currently selling for $260 @ Newegg.com on Oct. 15 2010):
    18 SIMD's
    2 FP Ops * 1440 SP's * 725 Mhz Core Clock = 2.088 TFLops SP
    72 Texture Units * 725 Mhz Core Clock = 52.2 GTexel/s
    32 ROP's * 725 Mhz Core Clock = 23.2 GPixel/s
    128 Z/Stencils * 725 Mhz Core Clock = 92.8 GSamples/s
    1000 Mhz GDDR5 * 256 bit bus / 2 times = 128 GB/s.
    Maximum Board Power = 151w
    lulz..

    The 5850 is EOL dude.. just like the 4870..

    Secondly, as most here have suggested, Barts is at a less expensive price point than Cypress. We don't know the exact details, but how does any of your remedial mathematics of pseudo-facts prove anything? What is your base?

    Lastly, how does any of this bolster your overall point that the GTX 460 is a better buy?

  19. #219
    Xtreme Mentor
    Join Date
    May 2008
    Location
    cleveland ohio
    Posts
    2,879
    ROPs are separate from shader count?

    I could have sworn they were linked somewhere.

    Also, it's like shader counts don't even matter either.
    Last edited by demonkevy666; 10-15-2010 at 11:45 AM.
    HAVE NO FEAR!
    "AMD fallen angel"
    Quote Originally Posted by Gamekiller View Post
    You didn't get the memo? 1 hour 'Fugger time' is equal to 12 hours of regular time.

  20. #220
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by Xoulz View Post
    lulz..

    The 5850 is EOL dude.. just like the 4870..

    Secondly, as most here have suggested, Barts is at a less expensive price point than Cypress. We don't know the exact details, but how does any of your remedial mathematics of pseudo-facts prove anything? What is your base?

    Lastly, how does any of this bolster your overall point that the GTX 460 is a better buy?
    First of all, what is your source for the 5850 being EOL?

    Secondly, tell me how you know that Barts 6800 series cards will be cheaper than Cypress. As far as I've seen, pricing will be in the $175 to $250-260 range for Barts PRO and Barts XT.

    "...remedial mathematics of pseudo facts"... assuming you mean remedial in the sense that I dumbed it down for slow learners? And pseudo as in false?
    What part of my facts is false, or pretended? Everything I listed has been said before within AMD, and has also obviously been released to the public. I even threw in numbers for overclocked cards. I simply stated that there has been confusion about Barts and its performance figures. Obviously parts and pieces were cut down considerably as time went along.

    Elaborate on your claims.

    Lastly, I said that the GTX 460 was a better buy than the HD 5850 at the moment, due to it being $50 cheaper and having much better tessellation performance. I did not stress this as something important, nor was it ever my overall point.


    Troll

    Quote Originally Posted by tajoh111 View Post
    The problem is not when they bring 5870-level performance down to 259 dollars, it's when they bring 5850-level performance again at 259 dollars. I have a hard time believing that a 960 shader card will compete with a 1600 shader card that is 50% bigger, at similar clocks. In addition, the leaked Vantage numbers seem very believable for a 960 shader part.
    Oh my god, common sense. :O
    Last edited by BeepBeep2; 10-15-2010 at 11:58 AM.

  21. #221
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by tajoh111 View Post
    If people don't care about naming, why did people go crazy over the Nvidia naming fiasco? The pricing of the cards lowered with each renaming and it was priced to what it was competing against. The same rules should apply to that product too regardless of the change in architecture.
    Renaming the same part as an upgrade was the problem with NV, and with the mobile parts specifically: telling people the 285M was a new part over the 9800M, and making it sound like it was related to G200 and not just a G92.

    Here we don't know if it's better or not; no one has a game benchmark, and it's supposed to be a new arch on the shaders. I have no doubt this will be in the same range, but unless they are moving to using the X2 name instead of x9xx, it's stupid; if they are moving x9xx to be a single strong part and bringing the X2 name back, then I don't see a problem, as long as it's faster than a non-overclocked 58xx.

    Quote Originally Posted by Manicdan View Post
    the 3870 was slower than a 2900xt last time i checked. but i guess thats ok cause it had better prices and power consumption, and cause it went from x9xx to x8xx

    so what if they called these the 6860 and 6840? would people stop crying?
    The 3870's stock clocks were a lot faster than the 2900XT for gaming, especially with AA; the problem was that the 3870 did not clock like the 2900. But if you then look at a 4890, it comes right up on a 5850 if you OC the 4890 but not the 5850.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  22. #222
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    I repeat again:

    I love it when folks pull out slides and leaked numbers that may be fake, and use them as the basis for a long drawn-out analysis.

    As far as whether Cypress is going EOL... it's simple:

    1) The actual leaked slides (the recent ones) are the most accurate/likeliest to be true, and they show Juniper lasting for the next quarter, but Barts and Cayman replacing Cypress

    2) Cypress is the least likely to be profitable given its die size, and it's redundant if Barts and Cayman exist, since those take over the bottom and top of Cypress

    But hey, if you guys like wasting time analyzing something that has little concrete info, carry on

    Quote Originally Posted by tajoh111 View Post
    The problem is not when they bring 5870-level performance down to 259 dollars, it's when they bring 5850-level performance again at 259 dollars. I have a hard time believing that a 960 shader card will compete with a 1600 shader card that is 50% bigger, at similar clocks. In addition, the leaked Vantage numbers seem very believable for a 960 shader part.
    First of all, why is it hard to believe that a 960 shader card can compete, if it's a new architecture?

    Many people have claimed that Cypress is inefficient, and it's possible they were right - Barts and Cayman might very well prove that

    Allow me to quote B3d:

    At Chiphell an interesting 3DMark Vantage screen was posted, which is supposed to be from an HD 6850. The interesting stuff is the feature tests.


    Feature Test 1 (texture) & Feature Test 6 (perlin noise ALUs) ~ HD 5750 level
    Feature Test 2 (Pixel ROPs) & Feature Test 5 (GPU particles, with high vertex shader load) ~ HD 5850 level
    Now why is that interesting? Because it's possible the retooled architecture fixed up inefficiencies and bottlenecks. We all know that feature tests can test the theoretical abilities of an architecture - so yes, it might very well have "just" 800/960 SP's. But it might be utilizing them far more efficiently - after all, 1600 SP's don't mean squat if you can only utilize 800 of them.
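To put that efficiency argument in numbers: if a rumored 800 SP Barts PRO really matched a 1440 SP HD 5850 in a shader-heavy feature test, the implied gain in per-FLOP utilization would be (using the rumored clocks from earlier in the thread, so treat these as assumptions):

```python
# Implied utilization gain if a rumored 800 SP @ 775 MHz Barts PRO matched
# an HD 5850 (1440 SP @ 725 MHz) in a shader-bound test. Specs are rumors.
barts_pro_tflops = 2 * 800 * 775 / 1e6   # 1.24 theoretical TFLOPS
hd5850_tflops = 2 * 1440 * 725 / 1e6     # 2.088 theoretical TFLOPS

# equal measured results from less theoretical throughput would mean
# Barts extracts ~68% more useful work per theoretical FLOP
efficiency_ratio = hd5850_tflops / barts_pro_tflops
print(round(efficiency_ratio, 2))  # 1.68
```

Which is exactly the point about 1600 SP's not meaning much if half of them sit idle.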

    Also, if they are getting full GF104 performance out of 2/3 the die size with Barts, Cayman is going to be far faster than Cypress...

    Finally, I remember when RV770 was about to be released. Leaked slides showed the 4850 competing with the 8800GT, and the 4870 competing with the 9800 GTX.

    People proclaimed the doom of ATI, with G200 released two weeks earlier. Turns out the joke was on Nvidia's fans.

    Will this be a repeat? I don't know, but ATI has had a habit of downplaying their cards... particularly the ones people have little concrete info on

  23. #223
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Quote Originally Posted by zerazax View Post
    ...
    I didn't say anything like "ZOMG IT'S ALL A FAILURE" and I did realize that the slides could have been fake. I just decided to put the info out there to show people what it MIGHT be like.

    Again, I've owned a lot of ATI cards and I'm still going to buy more.

  24. #224
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by zanzabar View Post
    renaming the same part as an upgrade was the problem with NV, and the mobile specifically with telling the people that the 285 mobile as a new part over the 9800m and they made it sound that it was related to the g200 and not just a g92.

    this is not know if its better or not, no1 has a bench for games and its supposed to be a new arch on the sahders. i have no doubts that this will be in the same range, but unless they are moving to using the x2 instead of the x9xx then its stupid, but if they are moving the x9xx to be a single strong part then having the x2 name back then i dont see a problem if its faster than a non overclocked 58xx.

    .
    But is naming a part in a way that implies it's an upgrade any better? How is naming a part which at this point likely has the performance of a 5850 a "6870" not deceptive?

    The 9800GT has a much lower MSRP than the 8800GT, so although there wasn't a change in technology, there was at least a much lower price. The 6870 has the performance of a 5850 and the price of a 5850, but it's being called a 6870. I wouldn't worry about 5870 people upgrading to it, but I could see people with a 4870 upgrading to it and being very disappointed.

    There might be new technology in the 6870, but I don't care for having 5 monitors (which is useless for gaming), and the technology itself is more about benefiting AMD than the consumer. It sounds like it's about a third smaller, and AMD is only lowering the price 10 dollars. I would be shocked if AMD made the 6870 as fast as or faster than a 5870, because the 6970 has to justify its likely double price point somehow, and being 30-40% faster is not going to cut it. The 6870 scoring sounds about right with ~X7500 at $250 and a 6970 at $500+ at X12000. It just doesn't seem possible that Barts XT would be 9000+ at $250 and Cayman XT at $500+ at X12000.
    Last edited by tajoh111; 10-15-2010 at 12:33 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  25. #225
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by tajoh111 View Post
    The problem is not when they bring 5870-level performance down to 259 dollars, it's when they bring 5850-level performance again at 259 dollars. I have a hard time believing that a 960 shader card will compete with a 1600 shader card that is 50% bigger, at similar clocks. In addition, the leaked Vantage numbers seem very believable for a 960 shader part.


    I guess it is not so much renaming as it is deceptive naming. Both NV's renaming and AMD's name shifting are doing the same thing, that is, making the consumer believe they are getting something faster than they really are. The difference is, the AMD part is cheaper to produce (compared to the 5850) and it sounds like they are charging the same price of admission.
    If you didn't know what GPU was going into the performance segment, then your only complaint would be whether the 6870 is good enough over the 5870 to justify whatever price segment it's set at, which is valid and OK. This, however, is not the same as renaming GPUs from generation to generation.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

