
Thread: The official GT300/Fermi Thread

  1. #326
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by 003 View Post
    No, nvidia has already stated there will be. It most likely will be dual PCB. There's nothing wrong with that, the 7950GX2 and the original GTX295 were both dual PCB as well.

    If power is an issue (which it shouldn't be), there is always the possibility of having three power connectors rather than two, or slightly scaling down the chip, as with the GTX295.
    However, the 7950 and the GTX 275 have far lower power usage than GF100 to begin with. GF100 in Tesla form uses at least 225 W, so in GeForce form it's going to be 250 W+. Double that, add the power draw of the rest of the PCB, and we're talking 450 W+, so triple 8-pin. Then take the heat into account: the cooler will be huge, and could be triple-slot or even longer than the 5870 X2.
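    A rough back-of-the-envelope version of that estimate, for what it's worth. The 225 W Tesla board power is the figure being discussed in this thread; the GeForce clock uplift, the dual-card derating and the PCB overhead are my own assumptions, not anything NVIDIA has stated:

    ```python
    # Speculative dual-Fermi power estimate; every factor except the 225 W
    # Tesla board power is an assumption for illustration only.
    tesla_tdp = 225          # W, Tesla (GF100) board power discussed above
    geforce_uplift = 1.15    # assume ~15% more for higher GeForce clocks/voltage
    dual_derate = 0.85       # dual-GPU cards usually use binned, downclocked chips
    pcb_overhead = 30        # W, assumed VRM losses, fan, bridge chip, etc.

    single_geforce = tesla_tdp * geforce_uplift                  # ~259 W
    dual_card = 2 * single_geforce * dual_derate + pcb_overhead  # ~470 W
    print(f"single GeForce ~{single_geforce:.0f} W, dual card ~{dual_card:.0f} W")
    ```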

  2. #327
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by BeyondSciFi View Post
    The more or less obvious answer is that gaming performance is not solely based on raw GFLOPS output. So if GFLOPS is not the number we should be looking at, then what? [...] Thankfully, there is a way of finding the differing (if different) output between GPUs which is not arbitrary or somewhat subjective. The way to do this is by comparing another group of numbers, namely, the texture and pixel fillrates.

    [...]

    So, it seems the GTX 380 may be faster than the Radeon 5870 (with mature drivers, if not earlier). The GTX 380 (using the rumored specs) has 22% more texture fillrate and 14% more pixel fillrate. Given that the difference in performance is not too extreme, I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks, but not necessarily all.
    I'm not sure anybody with a good head on their shoulders would submit to your logic. Up until the point you mentioned texture and pixel fillrates, you were spot on, and then...I don't know what happened. All that talk about core configurations and average game performance not being indicative of an overall performance picture, and then you go and compare fillrates? Like this is still the DX7 and 8 era?

    If fillrates were such an important factor, the 6800 Ultra wouldn't have kept up as well as it did, the 7900GTX should have done at least as well as the X1900XTX (if not better), and the 8800GTX would never have hit the 60%+ improvement over the 2900XT that it often did.

    Forget the fillrates; go back to what you were saying about games and drivers and different core configurations. There are too many factors to look at to really even speculate what the performance would be, but the best guesses are the ones that barely even touch fillrates.
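    As an aside, here's roughly where the quoted 22%/14% figures come from: fillrate is just unit count times core clock. The GT300 unit counts are the rumored 128 TMUs / 48 ROPs; the ~650 MHz core clock is my own assumption to make the numbers line up, not a confirmed spec:

    ```python
    # Fillrate = functional units * core clock. Cypress numbers are the known
    # HD 5870 specs; the GT300 clock is an assumed figure, not confirmed.
    def fillrates(tmus, rops, core_mhz):
        return tmus * core_mhz / 1000, rops * core_mhz / 1000  # GTexel/s, GPixel/s

    gt300_tex, gt300_pix = fillrates(128, 48, 650)   # rumored units, assumed clock
    cypress_tex, cypress_pix = fillrates(80, 32, 850)

    print(f"texture fillrate advantage: {gt300_tex / cypress_tex - 1:+.0%}")  # ~+22%
    print(f"pixel fillrate advantage:   {gt300_pix / cypress_pix - 1:+.0%}")  # ~+15% (the 14% in the quote)
    ```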
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  3. #328
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    nvidia said there will be a dual gt300, but they never said when...
    it took quite some time until the gtx295 came out...
    ati has been the fastest to push dual gpu cards to the market with a new gen gpu, and even they need a few months for that every time...
    so definitely don't expect a dual gt300 card anytime close to when the single gt300 comes out, which isn't exactly soon either...

  4. #329
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by saaya View Post
    nvidia said there will be a dual gt300, but they never said when...
    it took quite some time until the gtx295 came out...
    ati has been the fastest to push dual gpu cards to the market with a new gen gpu, and even they need a few months for that every time...
    so definitely don't expect a dual gt300 card anytime close to when the single gt300 comes out, which isn't exactly soon either...
    and by "a long time", that means by the time nvidia has done a die shrink to 32nm.

  5. #330
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by 003 View Post
    But don't forget. RV870 has had the ROPs and TMUs doubled to 32 and 80, respectively.

    GT200 already has 32 ROPs and 80 TMUs. GT300 will have 48 ROPs and 128 TMUs, and have SPs increased by 2.13x, and have SP efficiency increase in the process.
    What's the point? You can't compare processing units of different architectures that way. RV770 had 800 stream processors and GT200 had 240, but RV770 did not have 3.3x the processing power of GT200. That's because of differences in the way they work.

    Yeah, GT200 already has the number of TMUs and ROPs of Evergreen, but RV770 already had more SPs than what Fermi is going to have. So... so nothing, because these comparisons are useless.

    You can compare against other parts of the same or a similar architecture to try to predict the evolution (and of course, even without per-unit or per-clock changes, the prediction may fail because of bottlenecks, more or less efficient balancing of units, and so on, and then there may be architectural changes), but comparing units of G80 derivatives with units of R600 derivatives is pretty much a waste of time.

  6. #331
    Xtreme Member
    Join Date
    Dec 2006
    Location
    Edmonton, Alberta
    Posts
    182
    Quote Originally Posted by Smartidiot89 View Post
    I've thought about AMD having smaller dies for their GPUs, but I don't think it necessarily means better yields...

    While Nvidia has the uber-huge die, they have lower clock frequencies, while looking at AMD we see the opposite: a small die and really high frequencies. I might actually go as far as saying the yield difference between AMD's and Nvidia's top models is negligible?
    Until you actually figure it out. I went old school and drew it out on graph paper (the guys with CAD can confirm this).

    With ATI Cypress at 334 mm^2 (18.2mm x 18.2mm, plus a 0.8mm border for cutting, so 19mm x 19mm), I get 164 candidates per wafer (41 per quarter).

    I took Nvidia's smallest estimated measurement floating around the web, 467 mm^2 (21.4mm x 21.4mm, plus a 0.6mm border for cutting, so 22mm x 22mm), and I get 120 candidates per wafer (30 per quarter).

    Now, even with a 10% defect rate applied only to ATI, you still get 148 ATI dies against Nvidia's perfect 120 candidates per wafer. The Nvidia number only gets smaller if the die ends up bigger than 22mm x 22mm.

    So ATI's faulty yield is still 23% greater than Nvidia's perfect yield.

    I have no doubt Fermi is an awesome card.

    This may work for Tesla at $2000 a pop against FireStream, but for the GeForce 380 GTX against the Radeon 5870 the math for Nvidia's partners isn't so good.

    Whether Fermi survives will come down to profit margins. Basically, if you make Tesla cards you will be okay; selling GeForce cards will squeeze margins too tight.

    My question is what BFG, eVGA, XFX and Zotac think Fermi will do for their profit margins.

    Nvidia's partners may have the world's most powerful GPGPU lying at their feet, but if the partners can't profit from it, well?

    That was an example of Nvidia's best-case, perfect-yield scenario against an imperfect scenario for ATI. The real question is whether Nvidia's partners will profit from this beast. I think Fermi as a GeForce will hurt somebody's profit margin.
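    If anyone wants to redo the graph-paper exercise, a minimal sketch using the common gross-die approximation for a 300 mm wafer lands in the same ballpark (the die sizes are the ones quoted above; the formula is the usual rectangular-die estimate, not TSMC's actual figures):

    ```python
    import math

    def dies_per_wafer(die_w_mm, die_h_mm, wafer_diameter_mm=300):
        """Gross die candidates per wafer: wafer area / die area, minus an
        estimate of the partial dies lost around the curved wafer edge."""
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        die_area = die_w_mm * die_h_mm
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
        return int(wafer_area / die_area - edge_loss)

    cypress = dies_per_wafer(19, 19)   # ~160, close to the 164 counted above
    fermi = dies_per_wafer(22, 22)     # ~115, close to the 120 counted above
    print(cypress, fermi)
    # ~+25%, close to the 23% figure above
    print(f"ATI at 90% vs a perfect Fermi wafer: {cypress * 0.9 / fermi - 1:+.0%}")
    ```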

  7. #332
    Xtreme Addict
    Join Date
    Apr 2006
    Posts
    2,462
    Quote Originally Posted by tajoh111 View Post
    Looks like NV has customers already for Fermi.
    Which should be due to the fact that GPGPU on ATI pretty much sucks. Well, I think I'm exaggerating here but NVIDIA clearly has a lead there.
    Notice any grammar or spelling mistakes? Feel free to correct me! Thanks

  8. #333
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by FischOderAal View Post
    Which should be due to the fact that GPGPU on ATI pretty much sucks. Well, I think I'm exaggerating here but NVIDIA clearly has a lead there.
    At least when talking about supporting technologies and so on (the new compiled C/C++ code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find that compares GPGPU performance under equal conditions (meaning both cards running the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA DirectCompute demo (run on both cards by AnandTech).

    To be exact, this one:



    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare; I think it's too specific a program to draw any conclusions from.
    Last edited by Farinorco; 10-03-2009 at 03:06 AM.

  9. #334
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    A little bit of info on the 3 cards planned to be released (eventually): http://www.fudzilla.com/content/view/15795/1/

  10. #335
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by randomizer View Post
    A little bit of info on the 3 cards planned to be released (eventually): http://www.fudzilla.com/content/view/15795/1/
    I want to believe that, since Fudzilla is probably on nvidia's payroll, but then I saw the dual-GPU Fermi that is supposed to consume under 300 W and be released in 2009... yeah, right.

  11. #336
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Posts
    531
    Quote Originally Posted by Cybercat View Post
    I'm not sure anybody with a good head on their shoulders would submit to your logic. Up until the point you mentioned texture and pixel fillrates, you were spot on, and then...I don't know what happened. All that talk about core configurations and average game performance not being indicative of an overall performance picture, and then you go and compare fillrates? Like this is still the DX7 and 8 era?

    If fillrates were such an important factor, the 6800 Ultra wouldn't have kept up as well as it did, the 7900GTX should have done at least as well as the X1900XTX (if not better), and the 8800GTX would never have hit the 60%+ improvement over the 2900XT that it often did.

    Forget the fillrates; go back to what you were saying about games and drivers and different core configurations. There are too many factors to look at to really even speculate what the performance would be, but the best guesses are the ones that barely even touch fillrates.
    The biggest difference between ATI and NVIDIA, the way I see it at least, is that ATI bases its cards on "logical" SPs, which means that if you are able to make an awesome driver, you get awesome performance, but if you don't... well, you're stuck with your real SPs (160 in RV770, 5-wide each, so theoretically that makes 800 SPs). NVIDIA, on the other hand, is not as reliant on drivers because their architecture is more brute force: with no logical SPs you can improve a few algorithms, but that's all...

    So, ATI has a long way to go to make RV870 as efficient as RV770. The poor scaling we are seeing is because of that: they have to improve how the SPs are scheduled (in order to keep all of them fed) and how the new AF is handled, which is awesome but is turning out to be a HUGE problem (compared to older GPUs, where AF has been nearly free).

    That could also explain why there isn't more of a difference between the 5870 and the 5850: the more SPs you have, the more driver work is needed, so we could say the 5870 is going to improve more than the 5850 will (and the 5850 is already an awesome product nonetheless). Within 6 months I expect the 5870 to totally kill the 4870X2 performance-wise in ALL games/benchmarks.

  12. #337
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by 003 View Post
    No, nvidia has already stated there will be. It most likely will be dual PCB. There's nothing wrong with that, the 7950GX2 and the original GTX295 were both dual PCB as well.

    If power is an issue (which it shouldn't be), there is always the possibility of having three power connectors rather than two, or slightly scaling down the chip, as with the GTX295.
    The problem is that PCI-E standard allows a card to use up to 300 W of power. Adding more power connectors won't help. If the card uses more than 300 W in PCI-SIG's internal testing(which is done for every PCI-E product, they don't use furmark ), it won't be PCI-E certified and thus can not be sold as PCI-E product.

  13. #338
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by Calmatory View Post
    The problem is that PCI-E standard allows a card to use up to 300 W of power. Adding more power connectors won't help. If the card uses more than 300 W in PCI-SIG's internal testing(which is done for every PCI-E product, they don't use furmark ), it won't be PCI-E certified and thus can not be sold as PCI-E product.
    untrue, otherwise the mars wouldn't be on sale.

  14. #339
    Xtreme Cruncher
    Join Date
    Jun 2006
    Location
    On top of a mountain
    Posts
    4,163
    Quote Originally Posted by Calmatory View Post
    The problem is that PCI-E standard allows a card to use up to 300 W of power. Adding more power connectors won't help. If the card uses more than 300 W in PCI-SIG's internal testing(which is done for every PCI-E product, they don't use furmark ), it won't be PCI-E certified and thus can not be sold as PCI-E product.
    Quote Originally Posted by Helloworld_98 View Post
    untrue, otherwise the mars wouldn't be on sale.
    MFRs just need to get a little sneaky is all

    Bring back the Turbo button. It's just like getting a hotrod through emissions: the PCI-SIG test passes, then a little softmod (read: nitrous) and [Voice="Billy Mays"]Ka-Boom![/Voice]
    20 Logs on the fire for WCG: i7 920@2.8 X3220@3.0 X3220@2.4 E8400@4.05 E6600@2.4

  15. #340
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Growing rift between desktop and GPU memory...
    Can somebody from the industry explain how they're able to scale graphics memory so high, especially GDDR5?

    Desktop:
    DDR 266-400
    DDR2 533-1066
    DDR3 1066-2000
    Memory cell clock rate improving only marginally, especially recently: 200, 266, 250 MHz.

    GPU world:
    GDDR 400-750
    GDDR3 800-2200
    GDDR5 3600-5500
    Memory cell clock rate: 400, 550, 687 MHz??

    Furthermore, GDDR3 speeds doubled from ~1000 for the 6800/X800 in '04 to 2200 for the 9800GTX five years later.
    The 4870 launched just recently with 3600, the 5870 already uses 5000, and Samsung's 7000 chips are already entering production: 224 GB/s on a 256-bit bus, up to 336 GB/s on 384-bit for GT300!
    THAT'S CRAZY, REMARKABLE PROGRESS IN 5 SHORT YEARS SINCE THE 6800 LAUNCH.

    Maybe AMD should start using GDDR3 or GDDR5 for the desktop. With a couple of DDR3-1600 DIMMs (dual channel, 128-bit) you only get 25.6 GB/s; the 5870's GDDR5 on the same width would give you ~80 GB/s, about 3x more.
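    The comparison in that last paragraph is just bus width times effective data rate; a small sketch, assuming the usual dual-channel width for desktop DDR3 and the 5870's 4.8 Gbps GDDR5:

    ```python
    # Peak bandwidth (GB/s) = bus width in bytes * effective data rate in GT/s.
    def bandwidth_gbs(bus_bits, data_rate_mts):
        return bus_bits / 8 * data_rate_mts / 1000

    print(bandwidth_gbs(128, 1600))  # dual-channel DDR3-1600        -> 25.6 GB/s
    print(bandwidth_gbs(128, 4800))  # 5870's GDDR5 on a 128-bit bus -> 76.8 GB/s (~3x)
    print(bandwidth_gbs(256, 4800))  # the actual 256-bit HD 5870    -> 153.6 GB/s
    ```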

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  16. #341
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    Quote Originally Posted by Helloworld_98 View Post
    I want to believe that since fudzilla is probably on the payroll from nvidia
    Na, Fuad just reports every tiny rumour as fact, even if 10 minutes later another rumour contradicts it (which he will of course post as fact also).

  17. #342
    Xtreme Enthusiast
    Join Date
    Jul 2004
    Posts
    535
    Quote Originally Posted by Farinorco View Post
    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare; I think it's too specific a program to draw any conclusions from.
    I would say half a generation rather than a full generation, seeing as RV870 looks to be a better GPGPU than GT200, while GT300 looks to be a better GPGPU than RV870.

  18. #343
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by ***Deimos*** View Post
    Growing rift between desktop and GPU memory...
    Can somebody from the industry explain how they're able to scale graphics memory so high, especially GDDR5?

    Desktop:
    DDR 266-400
    DDR2 533-1066
    DDR3 1066-2000
    Memory cell clock rate improving only marginally, especially recently: 200, 266, 250 MHz.

    GPU world:
    GDDR 400-750
    GDDR3 800-2200
    GDDR5 3600-5500
    Memory cell clock rate: 400, 550, 687 MHz??

    Furthermore, GDDR3 speeds doubled from ~1000 for the 6800/X800 in '04 to 2200 for the 9800GTX five years later.
    The 4870 launched just recently with 3600, the 5870 already uses 5000, and Samsung's 7000 chips are already entering production: 224 GB/s on a 256-bit bus, up to 336 GB/s on 384-bit for GT300!
    THAT'S CRAZY, REMARKABLE PROGRESS IN 5 SHORT YEARS SINCE THE 6800 LAUNCH.

    Maybe AMD should start using GDDR3 or GDDR5 for the desktop. With a couple of DDR3-1600 DIMMs (dual channel, 128-bit) you only get 25.6 GB/s; the 5870's GDDR5 on the same width would give you ~80 GB/s, about 3x more.
    DDR and GDDR aren't the same, and GDDR4, GDDR5 and GDDR6 are quad data rate hence why 5GHz is so easy to reach.

  19. #344
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by prava View Post
    The biggest difference between ATI and NVIDIA, the way I see it at least, is that ATI bases its cards on "logical" SPs, which means that if you are able to make an awesome driver, you get awesome performance, but if you don't... well, you're stuck with your real SPs (160 in RV770, 5-wide each, so theoretically that makes 800 SPs). NVIDIA, on the other hand, is not as reliant on drivers because their architecture is more brute force: with no logical SPs you can improve a few algorithms, but that's all...

    So, ATI has a long way to go to make RV870 as efficient as RV770. The poor scaling we are seeing is because of that: they have to improve how the SPs are scheduled (in order to keep all of them fed) and how the new AF is handled, which is awesome but is turning out to be a HUGE problem (compared to older GPUs, where AF has been nearly free).

    That could also explain why there isn't more of a difference between the 5870 and the 5850: the more SPs you have, the more driver work is needed, so we could say the 5870 is going to improve more than the 5850 will (and the 5850 is already an awesome product nonetheless). Within 6 months I expect the 5870 to totally kill the 4870X2 performance-wise in ALL games/benchmarks.
    I dunno what to think anymore. It's far more complex nowadays than back in the TNT2 or GeForce1 days. AMD is almost certainly using different code paths for R7xx and R8xx given the architectural changes.
    * For all we know, there could be hardware issues that hurt shader efficiency (newer doesn't always mean better), and of course not all the R7xx optimizations may apply or have been added yet.
    * HQ AF could also be killing performance - who knows - it's always on, so you can't test without it to see the impact.
    * But looking back at the remarkable driver improvements on the GeForce2, how X800 3DMark and BF2 performance climbed over the years, and of course the fantastic CF scaling improvements on the 4870s - one thing's for sure - the 5870 will get better with age.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  20. #345
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by Helloworld_98 View Post
    DDR and GDDR aren't the same, and GDDR4, GDDR5 and GDDR6 are quad data rate hence why 5GHz is so easy to reach.
    I never compared DDR to GDDR directly. I was highlighting the percentage differences between generations. Just look at the ratios. Scaling of desktop RAM seems to have hit a "266" wall.

    200 (nForce2 DDR400) -> 266 (Typical highend Core2 DDR2-1066) -> 250 (i7 DDR3-2000) is a modest mem cell clock rate improvement on desktop

    375 (9800XT GDDR) -> 550 (9800GTX GDDR3) -> 687 (5.5Ghz GDDR5). That's 47% and 25% improvements so far... 7Ghz GDDR5 will make it 59% faster than GDDR3.

    Look at it this way.
    The 9700 Pro was the first 256-bit card, with an earth-shattering 20 GB/s.
    The fastest, most expensive X800 XT PE had 38 GB/s with GDDR3.
    Four years later, in April '08, the 9800GTX was pushing 70 GB/s, still with GDDR3.
    Remarkably, a few months later, in June '08, the 4870 brought 110 GB/s, and about a year later the 5870 takes it up a notch to 150 GB/s.

    Yet these are all 256-bit cards. It took many, many years to go from 20 to 70, but in a little over a year we've already gone to 150. GDDR5 rocks! See what I'm saying?

    [And I'm not even counting the jump you get from going to 384-, 512- or even 1024-bit memory buses.]
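    For what it's worth, those cell clocks fall out of the prefetch depth of each generation (2n for the original GDDR, 4n for GDDR3, 8n for GDDR5); a small sketch using the data rates quoted in this post:

    ```python
    # Cell clock ~= effective data rate / prefetch depth.
    generations = [
        ("GDDR  (9800 XT, 750 MT/s)",    750, 2),
        ("GDDR3 (9800 GTX, 2200 MT/s)", 2200, 4),
        ("GDDR5 (5.5 GT/s)",            5500, 8),
        ("GDDR5 (7 GT/s Samsung parts)", 7000, 8),
    ]
    prev = None
    for name, rate, prefetch in generations:
        cell = rate / prefetch
        step = f" ({cell / prev - 1:+.0%} vs previous)" if prev else ""
        print(f"{name}: cell clock ~{cell:.0f} MHz{step}")
        prev = cell
    print(f"7 GT/s GDDR5 vs GDDR3 cell clock: {7000 / 8 / 550 - 1:+.0%}")  # the 59% above
    ```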

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  21. #346
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    I think it would be funny if NVIDIA shipped A1 silicon to get this thing out on the market by Christmas.

  22. #347
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by randomizer View Post
    A little bit of info on the 3 cards planned to be released (eventually): http://www.fudzilla.com/content/view/15795/1/
    So, a full version of the chip, a harvested version of the chip, and a dual card. Let's say GTX380, GTX360 and GTX395? I think that makes it much easier to form a picture by comparison with the GT200 lineup, and those will probably end up being the final commercial names of the three products.

    Honestly, I didn't expect otherwise. The only difference from the previous generation's launch would be the dual card, but that's predictable since they are going to be late to market this time. By the time they launch, ATI will already be selling their X2 card, so they couldn't leave it for much longer.

  23. #348
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by Helloworld_98 View Post
    untrue, otherwise the mars wouldn't be on sale.
    What exactly is special about MARS?

    http://www.xbitlabs.com/news/video/d...522115634.html

    There you go. PCI-E 1.1 was standardized to allow up to 300 W per card (75 W from the PCI-E slot, the rest from the power connectors), and PCI-E 2.0 did not change anything with regard to power draw.

    And more here, PCI-E 2.0 standard: http://www.10stripe.com/featured/qui...xpress-2-0.php
    A single x16 card may now draw up to 300 W of power (75 W from the slot itself, 150 W from an 8-pin PEG connector, 75 W from a second PEG connector), up from 225 W (75 W from the slot, 75 W each from 2 6-pin PEG connectors) or originally 150 W (75 W from the slot, 75 W from a 6-pin PEG connector).
    I am very sure that the MARS did not exceed 300 W power consumption in PCI-SIG's tests. I have no clue about their testing methods, and I wouldn't be surprised if they just ran 3DMark06 and took the average consumption, or used their very own testing software and testbed.
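    To make the budget in that excerpt concrete, here's a tiny sketch adding up the PEG allowances; the wattages are the spec limits quoted above, not measurements:

    ```python
    # PCI-E power budget per source (watts), per the excerpt quoted above.
    SLOT, PEG_6PIN, PEG_8PIN = 75, 75, 150

    configs = {
        "slot + one 6-pin":     SLOT + PEG_6PIN,             # 150 W
        "slot + two 6-pin":     SLOT + 2 * PEG_6PIN,         # 225 W
        "slot + 6-pin + 8-pin": SLOT + PEG_6PIN + PEG_8PIN,  # 300 W, the certification ceiling
        "slot + two 8-pin":     SLOT + 2 * PEG_8PIN,         # 375 W, beyond the 300 W spec
    }
    for name, watts in configs.items():
        print(f"{name}: {watts} W")
    ```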
    Last edited by Calmatory; 10-03-2009 at 07:35 AM.

  24. #349
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Smartidiot89 View Post
    I've thought about AMD having smaller dies for their GPUs, but I don't think it necessarily means better yields...

    While Nvidia has the uber-huge die, they have lower clock frequencies, while looking at AMD we see the opposite: a small die and really high frequencies. I might actually go as far as saying the yield difference between AMD's and Nvidia's top models is negligible?
    It pretty much does. If you are getting ~30% yields on the full die and you will be harvesting the other cores, you are doing pretty well. Seeing how RV770 was in the 80-90% range, that is pretty dang good. RV870 was around 60% when it first ramped, so things will only improve.

    Quote Originally Posted by Vit^pr0n View Post
    I think we pretty much know the 5870x2 will be faster just by specs alone.

    Judging by revealed specs and BeyondSciFi's post, we can assume this is going to be a repeat of the previous gen performance ( GTX260 = 4870, GTX280 = 20-30% faster than 4870 )

    Yes, there are arch changes in the GT300. However, most of it is for non gaming applications from what's been revealed. The gaming side of it is almost like the 5800 series: Double everything ( Though in the GTX380's case, not everything is doubled )
    The thing he didn't take into account is clock speeds; the clocks on wiki are wrong. I will be surprised if they get higher than GT200 clocks at launch.

    Quote Originally Posted by randomizer View Post
    I think it would be funny if NVIDIA shipped A1 silicon to get this thing out on the market by Christmas.
    Not going to happen with 10% yields. They didn't even have enough silicon for their PR event...
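    Side note: the reason die size hits yield so hard is captured by the simple Poisson defect model; a sketch with an assumed defect density, purely illustrative since TSMC's real numbers aren't public:

    ```python
    import math

    def poisson_yield(die_area_mm2, defects_per_cm2):
        """Fraction of dies with zero defects under a simple Poisson model."""
        return math.exp(-die_area_mm2 / 100 * defects_per_cm2)

    d0 = 0.5  # assumed defects per cm^2 for an immature 40 nm process
    print(f"Cypress (~334 mm^2): {poisson_yield(334, d0):.0%}")  # ~19%
    print(f"GF100   (~467 mm^2): {poisson_yield(467, d0):.0%}")  # ~10%
    # Harvested (partially disabled) parts recover many of the defective dies,
    # which is why a ~30% full-die yield can still be workable.
    ```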
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  25. #350
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Quote Originally Posted by ***Deimos***
    Maybe AMD should start using GDDR3 or GDDR5 for the desktop. With a couple of DDR3-1600 DIMMs (dual channel, 128-bit) you only get 25.6 GB/s; the 5870's GDDR5 on the same width would give you ~80 GB/s, about 3x more.
    Answer this:
    Would you choose 1GB of unnecessarily fast (as system RAM) GDDR5 memory, OR 4-8GB of plain-jane DDR3?
    No need to answer, you would go for capacity.

    And why should the hardware makers even bother considering it?
    System memory bandwidth has basically no effect on system performance, while capacity, or rather the lack of it, has a HUGE negative effect. Then there's the whole pile of issues, starting with the cost difference between DDR and GDDR, the far more complex bus width, capacity as mentioned, power and thermals, and motherboard trace lengths.
    Quote Originally Posted by Helloworld_98 View Post
    GDDR4, GDDR5 (...) are quad data rate hence why 5GHz is so easy to reach.
    No, they are both double data rate, just like every DDR generation before them.
    and GDDR6
    GDDR6? What's that?
    You were not supposed to see this.
