
Thread: The official GT300/Fermi Thread

  1. #351
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by largon View Post
    GDDR6? What's that?
    I thought they had already set the mark for GDDR6, but I guess not.

  2. #352
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Helloworld_98 View Post
    I thought they had already set the mark for GDDR6, but I guess not.
    Not standardized yet.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  3. #353
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Calmatory View Post
    I am very sure that MARS did not exceed 300 W power consumption in PCI SIG's tests. I have no clue about their testing methods, and I wouldn't be surprised if they just ran 3DMark06 and took the average consumption. Or they used their very own testing software and testbed.
    3DMark03 Nature at 1280x1024, 6xAA, 16xAF.
    Highest single reading during the test.
    315W
    Maximum: Furmark Stability Test at 1280x1024, 0xAA.
    453W(!!!)

    http://www.techpowerup.com/reviews/A...5_MARS/27.html

    so yes, they did exceed 300W, not by much but they did...
    and yes, of course its possible to use 2 8 pin connectors and go for more than 300W... and in reality you dont even need 2 8 pin connectors as highend mainboards can usually supply more than 100W through the slot and 8+6 can deliver more than 225W, its not like the psu will shut down if a card pulls more than 12.5A through a 8pin vga connector... yes, you break the spec... but who really cares? some cards are already pulling more than 100W through the pciE slot, and the spec only allows 75W...
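    For anyone who wants to sanity-check the connector math above, here is a rough Python sketch using only the spec wattages (75W from the slot, 75W per 6-pin, 150W per 8-pin). Real boards can and do pull more than this, as the MARS numbers show, so treat it as the on-paper budget only.

        SLOT_W = 75        # PCIe x16 slot allowance per spec
        SIX_PIN_W = 75     # 6-pin PEG connector per spec
        EIGHT_PIN_W = 150  # 8-pin PEG connector per spec

        def board_limit(six_pins=0, eight_pins=0):
            """Spec-compliant power budget for a card with the given aux connectors."""
            return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

        print(board_limit(six_pins=1, eight_pins=1))  # 300 -> the 6+8 ceiling being discussed
        print(board_limit(eight_pins=2))              # 375 -> what an 8+8 layout would allow on paper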

    nvidia said gt300 power consumption will be comparable to the current gen... that doesnt really mean anything as there isnt much headroom left to go up... as we all know thermals are holding vgas back...

    Quote Originally Posted by LordEC911 View Post
    Not going to happen with 10% yields. They didn't even have enough silicon for their PR event...
    according to pcgh, nvidia told them that gt300 will go for a re-spin before itll be sold to retail iirc? but doing a respin in around 4 weeks? is that even possible? hot lots again?

  4. #354
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    So the PEAK was 315 W, the average something like 280 W, and that was in a single benchmark run. Hence the card stays under 300 W in games.

    My point is that there won't be a card that constantly uses more than 300 W in games. Most probably there won't be a retail card exceeding the MARS' power consumption. And as PCI-E standards should be backwards and forwards compatible, the 300 W standard limit isn't going anywhere anytime soon.

    So, all in all, this means that a GF100 X2 card won't be much more of a power hog than the current X2 cards.

  5. #355
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by Calmatory View Post
    So the PEAK was 315 W, the average something like 280 W, and that was in a single benchmark run. Hence the card stays under 300 W in games.

    My point is that there won't be a card that constantly uses more than 300 W in games. Most probably there won't be a retail card exceeding the MARS' power consumption. And as PCI-E standards should be backwards and forwards compatible, the 300 W standard limit isn't going anywhere anytime soon.

    So, all in all, this means that a GF100 X2 card won't be much more of a power hog than the current X2 cards.
    Nvidia itself is promoting CPU-GPU software, which means the GF100 cores will see higher utilization than games produce. Other than that, GF100 seems to have 8 + 6 instead of 6 + 6. If the rumors are to be believed and GF100 does eat around 230W spread across the 8 + 6 connectors and the PCIe slot, the X2 would most likely have 8 + 8 instead of the 8 + 6 used on the GTX 295, if a shrink is not used.
    Coming Soon

  6. #356
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by ajaidev View Post
    Nvidia itself is promoting CPU-GPU software, which means the GF100 cores will see higher utilization than games produce. Other than that, GF100 seems to have 8 + 6 instead of 6 + 6. If the rumors are to be believed and GF100 does eat around 230W spread across the 8 + 6 connectors and the PCIe slot, the X2 would most likely have 8 + 8 instead of the 8 + 6 used on the GTX 295, if a shrink is not used.
    Still, the power consumption must remain under 300 W in PCI SIG's internal testing, otherwise the card does not meet PCI-Express standards and hence cannot be sold as a PCI-Express compliant device. Oh, and the standard only talks about 6+8 pin, as far as I know.
    Last edited by Calmatory; 10-03-2009 at 10:04 AM.

  7. #357
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by saaya View Post
    according to pcgh, nvidia told them that gt300 will go for a re-spin before itll be sold to retail iirc? but doing a respin in around 4 weeks? is that even possible? hot lots again?
    Well, I don't know much about hot lots other than how risky they are. Respins usually take 6-8 weeks to get back from TSMC, which means they would get it back around the middle of Nov.

    So, best case for Nvidia: if this next spin is good to go and they start ramping production, they might get a handful of wafers ready by the end of the year; like I said before, maybe a couple hundred cards.

    Quote Originally Posted by ajaidev View Post
    Nvidia itself is promoting CPU-GPU software, which means the GF100 cores will see higher utilization than games produce. Other than that, GF100 seems to have 8 + 6 instead of 6 + 6. If the rumors are to be believed and GF100 does eat around 230W spread across the 8 + 6 connectors and the PCIe slot, the X2 would most likely have 8 + 8 instead of the 8 + 6 used on the GTX 295, if a shrink is not used.
    Again... you cannot use an 8+8pin on a "single" GPU, you will not pass PCI SIG certification.
    Last edited by LordEC911; 10-03-2009 at 10:04 AM.

  8. #358
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Using 8 + 8 does not mean they will use 300W+ on average. The GF100 is supposed to have 8 + 6 and not use the combo to the max; having extra power on tap does not mean the card will use it, but in super-intense situations the extra power will come in handy, as with the MARS.
    Coming Soon

  9. #359
    Xtreme Member
    Join Date
    Aug 2007
    Location
    Montenegro
    Posts
    333
    Nvidia Admits Showing Dummy Fermi Card at GTC.

    http://www.xbitlabs.com/news/video/d...r_Q4_2009.html
    Internet will save the World.

    Foxconn MARS
    Q9650@3.8Ghz
    Gskill 4Gb-1066 DDR2
    EVGA GeForce GTX 560 Ti - 448/C Classified Ultra
    WD 1T Black
    Theramlright Extreme 120
    CORSAIR 650HX

    BenQ FP241W Black 24" 6ms
    Win 7 Ultimate x64

  10. #360
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    http://www.bit-tech.net/news/hardwar...-wasn-t-real/1

    tim says somebody showed him a picture of what is supposed to be a real fermi card, but he couldnt make out anything cause it was a mobile phone pic and basically more wires than pcb... sorry, i dont buy it...

    o check out our brand new fermi card! see! we DO have real next gen cards, and they are working fine!
    x huh? thats not a real card!
    o yes, this IS a real fermi *cough* card!
    x no its not! i hear something moving inside when shaking it!
    o ohhh i think you misunderstood me. I said its a real fermi prototype card... the retail cards will come later...
    x so you DONT have a real fermi card right now...
    o of course we do!
    x well you called me here to show them to me, so where is it?
    o ohhhh thats too bad, fedex JUST picked them up 5mins ago... im so sorry...
    x . . .

  11. #361
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by prava View Post
    The biggest difference between ATI and NVIDIA, the way I see it at least, is that ATI bases its cards on "logical" SPs, which means that if you are able to write an awesome driver you get awesome performance, but if you don't... well, you are stuck with your real SPs (160 in RV770, 5-wide each, so theoretically that makes 800 SPs). On the other hand, NVIDIA is not that reliant on drivers, as their architecture is more "beastly": no logical SPs means you can improve a few algorithms, but that's all...

    So, ATI has a long way to go to make RV870 as efficient as RV770. The poor scaling we are seeing is because of that: they have to improve how the SPs work (in order to keep all of them fed) and how they manage the new AF, which is awesome but is proving to be a HUGE problem (compared to older GPUs, on which AF has been nearly free).

    That could also explain why there isn't more of a difference between the 5870 and the 5850: the more SPs you have, the more driver work it takes, so we could say the 5870 is going to improve a lot more than the 5850 will (and the 5850 is already an awesome product nonetheless). Within 6 months I expect the 5870 to totally kill the 4870X2 performance-wise in ALL games/benchmarks.
    I agree completely. Just as you said, it's probably a lot harder for ATI to keep newer parts fed the wider their SP engine gets. I actually think their utilization goes down significantly as the SPs go up, resulting in poorer efficiency with each subsequent generation. It's why a GTX 295 with 1.8 TFLOPs is able to beat a 5870 with 2.7 TFLOPs. The amount of FLOPs matters even less than it used to, because it's all in how you use them.

    A GF100 might have 1.7 TFLOPs when it gets released, which is only about 60% more than a GTX 285 at 1 TFLOP (more if you're rounding, of course). However, about 1/3 of the theoretical FLOP rate on G80-architecture-based cards goes almost entirely unused in real-world situations. The FLOP rate on GF100 is what it can actually achieve in the real world, and on top of that, it's more efficient than G80/GT200's baseline FLOP rate.

    AMD definitely has a ways to go with their drivers, as with a VLIW architecture they rely heavily on their compiler. Has anyone done any testing of AF performance specifically?
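    (For anyone wondering where those TFLOP figures come from, here is a quick sketch of the usual theoretical-peak math. The GTX 285/295 and HD 5870 shader counts and clocks are the published ones; the GF100 hot clock is only an assumed value, since Nvidia has not announced final clocks.)

        # Theoretical peak SP throughput = shaders x shader clock x FLOPs issued per clock.
        # G80/GT200 parts are usually counted with the "missing MUL" (3 FLOPs/clk);
        # Cypress and Fermi count a MAD/FMA as 2 FLOPs/clk.
        def peak_tflops(shaders, clock_ghz, flops_per_clock):
            return shaders * clock_ghz * flops_per_clock / 1000.0

        print(peak_tflops(240, 1.476, 3))   # GTX 285 -> ~1.06 TFLOPs
        print(peak_tflops(480, 1.242, 3))   # GTX 295 -> ~1.79 TFLOPs
        print(peak_tflops(1600, 0.850, 2))  # HD 5870 -> ~2.72 TFLOPs
        print(peak_tflops(512, 1.70, 2))    # GF100 at an assumed 1.7 GHz hot clock -> ~1.74 TFLOPs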
    Last edited by Cybercat; 10-03-2009 at 10:54 AM.
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  12. #362
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Lancaster, UK
    Posts
    473
    Quote Originally Posted by saaya View Post
    http://www.bit-tech.net/news/hardwar...-wasn-t-real/1

    tim says somebody showed him a picture of what is supposed to be a real fermi card, but he couldnt make out anything cause it was a mobile phone pic and basically more wires than pcb... sorry, i dont buy it...

    o check out our brand new fermi card! see! we DO have real next gen cards, and they are working fine!
    x huh? thats not a real card!
    o yes, this IS a real fermi *cough* card!
    x no its not! i hear something moving inside when shaking it!
    o ohhh i think you misunderstood me. I said its a real fermi prototype card... the retail cards will come later...
    x so you DONT have a real fermi card right now...
    o of course we do!
    x well you called me here to show them to me, so where is it?
    o ohhhh thats too bad, fedex JUST picked them up 5mins ago... im so sorry...
    x . . .
    Bit-tech is a very safe source; I doubt they would risk their reputation on a little thing like this.
    CPU: Intel 2500k (4.8ghz)
    Mobo: Asus P8P67 PRO
    GPU: HIS 6950 flashed to Asus 6970 (1000/1400) under water
    Sound: Corsair SP2500 with X-Fi
    Storage: Intel X-25M g2 160GB + 1x1TB f1
    Case: Sivlerstone Raven RV02
    PSU: Corsair HX850
    Cooling: Custom loop: EK Supreme HF, EK 6970
    Screens: BenQ XL2410T 120hz


    Help for Heroes

  13. #363
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Calmatory View Post
    Still, the power consumption must remain under 300 W in PCI SIG's internal testing, otherwise the card does not meet PCI-Express standards and hence cannot be sold as a PCI-Express compliant device. Oh, and the standard only talks about 6+8 pin, as far as I know.
    Bullcrap, the MARS uses over 450W at peak. As mentioned, the PCI-e slot and the cables can supply way more power than they're rated for. If worst comes to worst, they just won't advertise it as PCI-e compliant.

    I mean it is THEIR product, just because it doesn't pass PCI SIG testing, how can they prevent them from releasing it? Perhaps they will be paid off by nvidia.

    Quote Originally Posted by Cybercat View Post
    I agree completely. Just as you said, it's probably a lot harder to keep newer parts fed the wider their SP engine gets. I actually think the utilization goes down significantly as the SPs go up, resulting in poorer efficiency with each subsequent generation.
    That is a much larger problem for ATI than nvidia.
    Last edited by 003; 10-03-2009 at 10:52 AM.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  14. #364
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by 003 View Post
    That is a much larger problem for ATI than nvidia.
    Er, I should have specified I meant ATI, but yes. I'll make an edit.
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  15. #365
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by 003 View Post
    Bullcrap, the MARS uses over 450W at peak. As mentioned, the PCI-e slot and the cables can supply way more power than they're rated for. If worst comes to worst, they just won't advertise it as PCI-e compliant.

    I mean it is THEIR product, just because it doesn't pass PCI SIG testing, how can they prevent them from releasing it? Perhaps they will be paid off by nvidia.
    Are you an idiot, OR did you knowingly miss my post, which said
    Still the power consumption must remain under 300 W in PCI SIG's internal testing
    Also the PCI-E standard says:
    A single x16 card may now draw up to 300 W of power, up from 225 W
    Every PCI-Express device MUST BE certified by PCI SIG, otherwise the device cannot be sold as a PCI-Express device.

    Whatever it is, upcoming Nvidia cards WILL NOT GO OVER 300 W on average. Period.

  16. #366
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Bodkin View Post
    Bit-tech is a very safe source; I doubt they would risk their reputation on a little thing like this.
    oh dont get me wrong, i completely trust tim on this!!!
    its just that he said himself, he couldnt really tell much from the picture he saw, and it sounds like he wasnt 100% convinced that the picture he saw was the REAL REAL fermi card... otherwise he wouldnt even have mentioned that it was a blurry cell phone pic of lots of wires coming off a pcb... if hed really believe he had seen fermi in that moment, he would have written just that, that nvidia then DID show him a picture of the actual card...

    btw, a good friend apparently managed to get a copy of the picture in question and mailed it to me

    behold! this little puppy right here... this is fermi!



    Quote Originally Posted by Calmatory View Post
    Whatever it is, upcoming Nvidia cards WILL NOT GO OVER 300 W on average. Period.
    i dont think so either... power aside, the only way to cool 300W or even more is with a 3slot+ heatsink or with water... and thats just too expensive...
    Last edited by saaya; 10-03-2009 at 11:09 AM.

  17. #367
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by AMDDeathstar View Post
    Until you actually figure it out. I went old school and drew it out on graph paper. (The guys with CAD can confirm this.)

    With ATI Cypress at 334mm^2 (18.2mm x 18.2mm plus a .8mm border for cutting, so 19mm x 19mm) I get 164 candidates per wafer (41 per quarter).

    I took Nvidia's smallest estimated measurement floating around the web, 467mm^2 (21.4mm x 21.4mm plus a .6mm border for cutting, so 22mm x 22mm), 30 per quarter. I get 120 candidates per wafer.

    Now even with a 10% defect rate applied only to ATI, you still get 148 ATI candidates against Nvidia's perfect 120 candidates per wafer. The number gets even smaller above 22mm x 22mm.

    So ATI's faulty yield is 23% greater than Nvidia's perfect yield.

    I have no doubt Fermi is an awesome card.

    This may work for Tesla at $2000 a pop vs FireStream, but for a GeForce 380 GTX against the Radeon 5870 the math for Nvidia's partners isn't so good.

    Whether Fermi survives will come down to profit margins.
    Basically, if you make Tesla cards you will be okay; selling GeForce will squeeze margins too tight.

    My question is what BFG, eVGA, XFX and Zotac think Fermi will do for their profit margins.

    Nvidia's partners may have the world's most powerful GPGPU lying at their feet, but if the partners can't profit from it, well?

    That was an example of Nvidia's smallest-die, perfect-yield case against an imperfect-yield scenario for ATI.

    The real question is whether Nvidia's partners will profit from this beast.
    I think Fermi as a GeForce will hurt somebody's profit margin.
    Don't forget that yields decrease exponentially with size too... not linearly

    Either way you look at it, the math favors ATI on price, especially if the experience with 40nm isn't going pleasantly for Nvidia.

  18. #368
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by zerazax View Post
    Don't forget that yields decrease exponentially with size too... not linearly
    Source?

    Unless it's based on the fact that when the chip size grows, fewer chips fit on the wafer, AND when the chip size grows, there is a higher chance of a defect landing on any given chip.

    So actually that makes some sense. The defect rate per chip grows linearly, while chips per wafer decrease linearly. As the two compound, yields decrease exponentially. No?
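    For what it's worth, the usual "exponential with area" rule of thumb falls out of the simple Poisson defect-yield model, where the chance of a die catching zero defects shrinks exponentially as the die grows. A tiny sketch, with the defect density picked purely for illustration:

        from math import exp

        def poisson_yield(die_area_mm2, defects_per_mm2):
            # Poisson model: probability that a die catches zero defects
            return exp(-die_area_mm2 * defects_per_mm2)

        D0 = 0.0015  # assumed defect density (defects per mm^2), illustrative only
        for area in (334, 467, 550):
            print(area, "mm^2 ->", round(poisson_yield(area, D0) * 100, 1), "% good dies")
        # 334 -> ~60.6%, 467 -> ~49.6%, 550 -> ~43.8% with this made-up defect density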
    Last edited by Calmatory; 10-03-2009 at 11:20 AM.

  19. #369
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Calmatory View Post
    Source?

    Unless it's based on the fact that when the chip size grows, fewer chips fit on the wafer, AND when the chip size grows, there is a higher chance of a defect landing on any given chip.

    So actually that makes some sense. The defect rate per chip grows linearly, while chips per wafer decrease linearly. As the two compound, yields decrease exponentially. No?
    afaik the amount of "holes" in the wafer is almost constant between a wafer with many small and few big chips... if a small chip that gets you 400 chips per wafer has a yield of 90% that means probably around 40+ defects per wafer (even if a chip gets struck it may still work with some redundant logic disabled or as a cut down version etc)

    if you go for a bigger chip that only fits around 100 times on the wafer like gt300, having 40 defects per wafer means you will have a yield of 60%+, because the bigger the chip the higher the chance two defects happen in the same chip.

    rv870 is 333mm^2 and rumored to have started with yields of ~60% it seems.
    im just guessing here, but 333mm^2 should mean they can get around 175 chips per wafer, so that means 105 functional chips which means 70+ defects. gt300 should be around 550mm^2 which means around 100 chips per wafer max, and with 70+ defects, 30+ fully functional chips.

    another factor is the bigger your chip, the more wafer space is wasted on the edges, but thats not a huge difference...

    wafer costs are around 3000-5000 us$, so 30 chips per wafer = 100-166$ per gpu, pure die costs

    for rv870 it should be around 100 chips per wafer so 30-50$ per chip cost...

    these numbers are just examples, they arent accurate...
    but you can see, for a rough 50% transistor increase of gt300 over rv870, the costs more or less triple...
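    For reference, here is a small Python sketch that reproduces those ballpark figures. The good-die counts and the 3000-5000 US$ wafer price are just the guesses from this post, and the dies-per-wafer formula is the usual rough approximation for a 300 mm wafer, so treat the output as illustrative only:

        from math import pi, sqrt

        def gross_dies(die_area_mm2, wafer_diameter_mm=300):
            # standard dies-per-wafer approximation: wafer area term minus an edge-loss term
            r = wafer_diameter_mm / 2
            return int(pi * r * r / die_area_mm2 - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2))

        for name, area_mm2, good_dies in (("rv870", 333, 105), ("gt300", 550, 30)):
            gross = gross_dies(area_mm2)  # ~175 for rv870, ~100 for gt300
            for wafer_cost in (3000, 5000):
                print(name, gross, "gross /", good_dies, "good ->", round(wafer_cost / good_dies), "$ per good die")
        # rv870 comes out around 30-50$ per good die, gt300 around 100-170$ -- the same ballpark as above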
    Last edited by saaya; 10-03-2009 at 11:38 AM.

  20. #370
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Miltown, Wisconsin
    Posts
    353
    Quote Originally Posted by Farinorco View Post
    At least in terms of supporting technologies and such (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find comparing GPGPU performance under equal conditions (meaning both cards running the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA demo of DirectCompute (run on both cards by AnandTech).

    To be exact, this one:

    But yes, I'm of the opinion that NVIDIA is one generation ahead when talking about GPGPU technologies. We will see in terms of GPGPU performance when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.


    here too is a crunching comparison on Guru3D.

    http://www.guru3d.com/article/radeon...review-test/25

  21. #371
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    has anybody read the gt300 whitepaper?
    i just saw that nvidia said enabling ecc caused a performance drop of around 20% as a result of the bandwidth overhead? :o
    I hope desktop gpus will have ecc disabled then!

  22. #372
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by saaya View Post
    wafer costs are around 3000-5000 us$, so 30 chips per wafer = 100-166$ per gpu, pure die costs

    for rv870 it should be around 100 chips per wafer so 30-50$ per chip cost...
    AMD must be making a huge profit on the 5870's then, 5850's too.

  23. #373
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Calmatory View Post
    Source?

    Unless it's based on the fact that when the chip size grows, fewer chips fit on the wafer, AND when the chip size grows, there is a higher chance of a defect landing on any given chip.

    So actually that makes some sense. The defect rate per chip grows linearly, while chips per wafer decrease linearly. As the two compound, yields decrease exponentially. No?
    wouldnt that be f(x)=2x? there is a lot more to it than that obviously. here is what dr who has to say about designing gpu's:
    Quote Originally Posted by Drwho? View Post
    With all due respect, I don't know if you have ever tried to work on a puzzle that has more than 1 billion parts... this is what GPUs are... so it is always easy, from your screen's point of view, to write those kinds of statements; nevertheless, I am sure you have never even completed a Lego with 100 000 parts ;-)
    Last edited by Chumbucket843; 10-03-2009 at 11:49 AM.

  24. #374
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Farinorco View Post
    At least in terms of supporting technologies and such (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find comparing GPGPU performance under equal conditions (meaning both cards running the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA demo of DirectCompute (run on both cards by AnandTech).

    To be exact, this one:

    But yes, I'm of the opinion that NVIDIA is one generation ahead when talking about GPGPU technologies. We will see in terms of GPGPU performance when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  25. #375
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Helloworld_98 View Post
    AMD must be making a huge profit on the 5870's then, 5850's too.
    With a kit cost of ~$70-80, yeah they are.
    Compare that to Nvidia's G200 with a kit cost of ~$120-130, and GF100, which should be upwards of $150; I think I guesstimated just shy of $200.

    Quote Originally Posted by 003 View Post
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    As I have said before, DP is totally dependent on clock speeds. If they hit their clock targets, they should have a ~40% advantage over a 5870; if they end up with G200-like clocks, that can drop to a ~20% advantage.

    Then what if Cypress has more units on it for the 5890 plus a higher clockspeed?
    That could make it even closer.
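    A rough sketch of where those percentages come from, assuming Fermi's announced 512 cores with half-rate DP FMA and Cypress' 1/5-rate DP. The two Fermi hot clocks below are assumptions standing in for "target" and "G200-like" clocks, not confirmed numbers:

        def fermi_dp_gflops(hot_clock_ghz, cores=512):
            # FMA counts as 2 FLOPs; DP runs at half the SP rate on Fermi
            return cores * hot_clock_ghz * 2 / 2

        def cypress_dp_gflops(core_clock_ghz=0.850, alus=1600):
            # MAD counts as 2 FLOPs; DP runs at 1/5 the SP rate on Cypress
            return alus * core_clock_ghz * 2 / 5

        hd5870 = cypress_dp_gflops()          # ~544 GFLOPs
        for clk in (1.50, 1.30):              # assumed target clock vs G200-like clock
            fermi = fermi_dp_gflops(clk)
            print(clk, "GHz ->", round(fermi), "GFLOPs,", round((fermi / hd5870 - 1) * 100), "% over an HD 5870")
        # ~1.5 GHz -> ~768 GFLOPs (~41% ahead); ~1.3 GHz -> ~666 GFLOPs (~22% ahead)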
    Last edited by LordEC911; 10-03-2009 at 11:55 AM.
