
Thread: GeForce 8900: an overclocked G80 that should give R600 a run for the money.

  1. #1
    Xtreme News
    Join Date
    Dec 2005
    Location
    California
    Posts
    1,594

    GeForce 8900: an overclocked G80 that should give R600 a run for the money.

    The title says it all, really. NVIDIA is planning on overclocking the G80 as far as they can push it, which alone wouldn't give you too much of a performance increase. However, NVIDIA also has a figurative ace in the hole: a driver tweak. What exactly this driver tweak does is unknown. Whatever it is, though, it should give ATI some serious competition when they release their gigantic R600.
    http://www.techpowerup.com/?25258

  2. #2
    Xtreme Enthusiast
    Join Date
    Apr 2006
    Posts
    820
    lol, an 8900 that's just an overclocked G80? Testing the manufacturers' RMA departments?
    7900, anyone?




    "WW3 will be fought with nukes, WW4 will be fought with rocks!":

  3. #3
    Xtreme Mentor
    Join Date
    Aug 2006
    Location
    HD0
    Posts
    2,646
    By "driver tweak" I assume they mean GDDR4 memory and a moderate OC...

    because people like us would just crack the drivers...

  4. #4
    YouTube Addict
    Join Date
    Aug 2005
    Location
    Klaatu barada nikto
    Posts
    17,574
    Nice linkage: a page that uses the Inq as a source. Can't say I believe it completely.
    Fast computers breed slow, lazy programmers
    The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
    http://www.lighterra.com/papers/modernmicroprocessors/
    Modern Ram, makes an old overclocker miss BH-5 and the fun it was

  5. #5
    Xtreme Enthusiast
    Join Date
    Feb 2006
    Posts
    976
    I remember the never-released 7800 Ultra that ended up as the 7800 GTX 512MB...
    here it would be the 8800 Ultra...
    www.HotOverclock.com Founder & Editor in Chief
    ---------------------
    Upgrading to X58 ...

    Q6600 G0 @ 3.8GHz FSB 423MHz w/Swiftech H2O 120 Compact Dual 120mm fan Last OC Details Here
    abit IX38 QuadGT Rev1.0 (X38 Rev.01)
    4x1GB Patriot +SL PC2-6400 Based on Aeneon AET760UD00-25D Chips! @ 1:1 1000MHz 5-4-4-8 2.0v
    Sapphire Radeon HD 4850 A2 Normal Edition @ Core 820MHz Mem 2100MHz
    Display 19" LG FLATRON L1960TR LCD Contrast 3000:1 Response Time 2ms Lightning Fast!! w/f-Engine Chip and DFC Technology
    HDD 250GB Seagate Barracuda 7200.10 ST3250820AS
    HDD 120GB Maxtor Max.10 6L120M0

    PSU Zalman ZM1000-HP 1000 Watt Six 12v Rail and Dual Separate Board
    A4Tech X7 Gaming Mouse X-750BF Dual Laser Engine w/2500DPI
    More about my Memory : it can do 950MHz 4-4-4-8 2.0v Stable and 1100MHz 5-7-7-12 2.2v Stable and 1200MHz 6-8-8-21 2.3v not Stable!

  6. #6
    Xtreme Member
    Join Date
    Mar 2006
    Location
    Ultra Clean Room
    Posts
    323
    The upgrading game between NVIDIA and ATI will return, like this:
    Last edited by milkcafe; 02-11-2007 at 02:13 AM.

  7. #7
    all outta gum
    Join Date
    Dec 2006
    Location
    Poland
    Posts
    3,390
    One of the comments on the original article:
    I can guess what the driver tweak will be... maybe it will be a working driver :roll:
    www.teampclab.pl
    MOA 2009 Poland #2, AMD Black Ops 2010, MOA 2011 Poland #1, MOA 2011 EMEA #12

    Test bench: empty

  8. #8
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    I still think they're holding off the transition to a new process until 65nm.

    Who wants to bet a cookie that the driver tweak activates the 'missing MUL' for general shading?

    G80's architecture is dual-issue scalar + MUL, yet the MUL is never issued for general shading. I don't know if it'll add 1/3 more performance to G80 in some cases, but it'll do something positive. I always figured this driver tweak would come to the 8800 when R600 is released. If they don't activate it on G80, but release the same silicon with it turned on and call it an 8900, I'd be royally pissed if I owned an 8800.
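    For reference, here's the back-of-the-envelope math behind the shader numbers in this thread (a rough sketch in Python, assuming the commonly quoted 8800 GTX specs of 128 stream processors at a 1.35GHz shader clock):

    Code:
        # Theoretical G80 (8800 GTX) shader throughput, assuming the commonly
        # quoted specs: 128 scalar stream processors at a 1.35 GHz shader clock.
        SP_COUNT = 128
        SHADER_CLOCK_HZ = 1.35e9

        # Each SP issues a MADD (2 FLOPs) per clock; the co-issued MUL would add 1 more.
        madd_only_gflops = SP_COUNT * SHADER_CLOCK_HZ * 2 / 1e9   # ~345.6 GFLOPS
        with_mul_gflops = SP_COUNT * SHADER_CLOCK_HZ * 3 / 1e9    # ~518.4 GFLOPS

        print(f"MADD only:  {madd_only_gflops:.1f} GFLOPS")
        print(f"MADD + MUL: {with_mul_gflops:.1f} GFLOPS")
        print(f"MUL share of peak: {1 - madd_only_gflops / with_mul_gflops:.0%}")

    So the MUL accounts for roughly a third of the quoted peak (about 173 GFLOPS); relative to the MADD-only figure, fully using it would be at most a ~50% gain.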

    As for the driver tweak being one that actually makes G80 SLI fully work in Vista, I find that rather amusing.
    Last edited by turtle; 02-11-2007 at 03:31 AM.
    That is all.

    Peace and love.

  9. #9
    Xtreme Addict
    Join Date
    Jan 2006
    Location
    Lithuania, Kaunas
    Posts
    1,313
    Quote Originally Posted by xoqolatl
    One of the comments on the original article:


    It's written "overclocked G80", so I wouldn't expect GDDR4 or an 80nm process. I'm a bit disappointed, because I was really hoping to see G81 on R600's arrival. I can't see why anybody would buy an overclocked and probably overpriced G80.

  10. #10
    Xtreme X.I.P.
    Join Date
    Aug 2002
    Posts
    4,764
    The MUL theory is interesting. Other possibilities, apart from this and an overclock, are whether the 8800 GTX has all its stream processors enabled to aid yields, or whether another 32 (pushing it up to 160) can be used in a pinch for a low-volume, very expensive part. That would assume some built-in redundancy, which of course is just a wild guess....

    You have to assume there is something here or thereabouts to counter R600, which has to be quicker to stop the "ho-hum" factor.

    Of course the real counter would be to drop the 8800 GTX price massively, but they ain't going to do that, unfortunately, I think.

    Regards

    Andy

  11. #11
    Xtreme Addict
    Join Date
    Dec 2005
    Posts
    1,035
    I was hoping to see 1GB of GDDR4 and a 512-bit bus in the refresh :P I'm asking too much, I guess...

  12. #12
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    If it's an overclocked G80 it should be called the 8800 Ultra or 8850. After all, it's just a highly binned variant, much like the X850 XT PE was: a low-volume part just so that they can appear competitive; they don't even have to ship that many as long as the benchmarks are out there. Most people would still go for the GT or GTX anyway if it's only a matter of clockspeed.

    The G81 should get the 8900 tag.

  13. #13
    Xtreme Mentor
    Join Date
    Sep 2005
    Location
    Netherlands
    Posts
    2,693
    Well, a while ago there was the rumor that NVIDIA ordered 80nm G80 cores at the fab.

    Could be those 80nm cores are meant for the refresh.

    Like others, I doubt this refresh will get the name 8900 if it is JUST a new process.

    I guess it's just something to compete with the R600, and they'll continue work on the G81 to fight the potential X2900.
    Time flies like an arrow. Fruit flies like a banana.
    Groucho Marx



    i know my grammar sux so stop hitting me

  14. #14
    Registered User
    Join Date
    Sep 2006
    Posts
    54
    Quote Originally Posted by BlackX


    It's written "overclocked G80", so I wouldn't expect GDDR4 or an 80nm process. I'm a bit disappointed, because I was really hoping to see G81 on R600's arrival. I can't see why anybody would buy an overclocked and probably overpriced G80.
    My thoughts exactly.
    E6550
    Foxconn P35A
    G.Skill DDR2-800 HK
    Sapphire 3870
    150 gig Raptor
    Corsair HX520

  15. #15
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Texas
    Posts
    1,017
    Who didn't see this coming? They've had like a 6-month lead for some R&D to improve the G80 so they can counter or beeotchslap R600 whenever it gets here.
    "Roses are FF0000, Violets are 0000FF, All my base are belong to you."
    Asus P6X58D-E
    I7 930 @ 3.9
    Asus 660ti DirectCU OC
    Antec Twelve-Hundred

  16. #16
    Xtreme Addict
    Join Date
    Jun 2004
    Location
    near Boston, MA, USA
    Posts
    1,955
    I would expect at least 80nm, but better to just go for 65nm if the fabs can make good yields and they get a working design (those are big IFs, I do realize). GDDR4 would help fight back against the superior 512-bit memory controller on the R600. But there is little chance they'd get a redesigned controller.

    As for activating features later, that's normal for NVIDIA. In fact, there have been several first-gen vs. refresh feature issues in prior years, so I'd expect them to do it again... Yes, that doesn't bode well for the early-adopter squad, but it is very normal for NVIDIA to do.

    That 6-month lead time hasn't been enough for them to get a quality Vista driver, so I wouldn't hold my breath. They may very well just give the lead over to the R600, let that be that, and worry about a revised chip in a few months. If you look at the stock vs. overclocked scores, I think you'll find the clues: a mildly overclocked G80 "Ultra" won't do it against the R600. They will need a solid 20-25% boost to hold the lead solidly, and even then they may not hold it 100% in every bench.

    We'll have to see. But 65nm and faster GDDR4 should be the minimum to earn the 8900 moniker. Anything short of that will just be premium-binned "Ultra" parts, and ho-hum on those.

  17. #17
    Xtreme Addict
    Join Date
    Sep 2005
    Location
    Las Vegas, NV
    Posts
    1,771
    LOL... Werd™
    Main Rig: Intel Core i7 7700k @ 4.2GHz, 64GB of memory, 512GB m.2 SSD, nVidia GTX1080Ti
    NAS: QNAP TVS-1282, 8 x 4TB WD Golds(Main Storage Pool), 4 x 960GB M4 Crucial (VM Storage) , 2 x 512GB M.2 Caching
    Private Cloud: 4 Nodes (2 x Xeon 5645, 48GB DDR3 ECC/REG, 1 x 1TB HDD, 1 x 960GB SSD/Each)
    Distributed Encoding Cloud: 4 Nodes (2 x Xeon x5690, 24GB DDR3 ECC/REG, 1 x 128GB SSD/Each)
    Feedback
    EBAY:HEAT

  18. #18
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by turtle
    I still think they're holding off the transition to a new process until 65nm.

    Who wants to bet a cookie that the driver tweak activates the 'missing MUL' for general shading?

    G80's architecture is dual-issue scalar + MUL, yet the MUL is never issued for general shading. I don't know if it'll add 1/3 more performance to G80 in some cases, but it'll do something positive. I always figured this driver tweak would come to the 8800 when R600 is released. If they don't activate it on G80, but release the same silicon with it turned on and call it an 8900, I'd be royally pissed if I owned an 8800.
    I was thinking about that, actually. As you said, it's unknown if it will really give significant improvements; a MUL has less functionality than a MADD, so it's not as linear an improvement as the theoretical numbers make it out to be.

    I've read many highly technical threads on the G80, and to this day I still don't know how it's able to perform so well without that MUL. We're talking about a highly efficient chip here (mmmm.....scalar). The only way I can figure it is to go by what some people in the industry have said: the G80 is closer to 90% utilization, compared to about 60% for the last-generation GF7 architecture (I wish I had a link), plus about 40% more raw theoretical shading power than the best GF7 card (not including the GX2). Put that together and that's a 70% improvement at best, but that rests on a lot of speculation. The 8800 GTX still ends up nearly doubling 7900 GTX performance, for which I guess the added vertex and texture performance can be credited.
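    As a quick sanity check on how those speculative figures combine (a sketch only; the utilization and throughput numbers above are forum speculation, not official figures), adding them gives the 70% mentioned, while multiplying them lands closer to the "nearly doubling" that's actually observed:

    Code:
        # Combining the speculative figures quoted above (not official numbers).
        g80_utilization = 0.90   # rumoured shader utilization of G80
        gf7_utilization = 0.60   # rumoured utilization of the GF7 generation
        raw_shader_gain = 1.40   # ~40% more theoretical shading power than the best GF7

        additive = (g80_utilization - gf7_utilization) + (raw_shader_gain - 1)
        multiplicative = (g80_utilization / gf7_utilization) * raw_shader_gain

        print(f"Additive estimate:       +{additive:.0%}")        # +70%
        print(f"Multiplicative estimate: {multiplicative:.2f}x")  # ~2.10x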
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  19. #19
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Quote Originally Posted by Cybercat
    I was thinking about that, actually. As you said, it's unknown if it will really give significant improvements; a MUL has less functionality than a MADD, so it's not as linear an improvement as the theoretical numbers make it out to be.

    I've read many highly technical threads on the G80, and to this day I still don't know how it's able to perform so well without that MUL. We're talking about a highly efficient chip here (mmmm.....scalar). The only way I can figure it is to go by what some people in the industry have said: the G80 is closer to 90% utilization, compared to about 60% for the last-generation GF7 architecture (I wish I had a link), plus about 40% more raw theoretical shading power than the best GF7 card (not including the GX2). Put that together and that's a 70% improvement at best, but that rests on a lot of speculation. The 8800 GTX still ends up nearly doubling 7900 GTX performance, for which I guess the added vertex and texture performance can be credited.

    All good points on possible reasoning. Although the TMUs may well have something to do with it, as that surely hampered R580, I don't think the vertex performance does to that large an extent.

    I too question how it performs so well when the GFLOP number, for instance, is substantially lower without the MUL. It actually puts it more on the level of an X1900, which makes it all the more amazing. True enough, G80 and a scalar architecture in general are much, much more efficient than R580's (and of course G70's) setup, but still, can that alone justify the difference?

    I suppose the MUL is working, as B3D points out, doing 'interpolation and perspective correction', which I imagine also has something to do with its efficiency (I think people overlook that the MUL is actually doing something). Still, I imagine the driver tweak to let it do general shading will improve overall performance in at least some scenarios, if not all. How much... well, that's the question. Perhaps it will bring the overall efficiency down in the grand scheme because, as you said, a MUL is not as useful as, say, adding another scalar unit (MADD). But if it can be used more efficiently for general shading in some (or all) scenarios than for what it's doing now, I imagine they'll use it for that. I truly believe that is what the 'driver tweak' is; this is the ace up their sleeve. The question to me, though, is still 'How much will it matter?' Apparently The Inq, or perhaps NVIDIA, thinks quite a bit.
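    (For context on the 'interpolation and perspective correction' job mentioned above, here is the textbook math: perspective-correct interpolation of a vertex attribute means interpolating attribute/w and 1/w linearly in screen space and then dividing, which is exactly the kind of per-fragment multiply work a spare MUL could soak up. This is a generic illustration, not a claim about how G80 actually schedules it.)

    Code:
        # Perspective-correct interpolation of one attribute between two projected
        # vertices (textbook rasterization formula; nothing G80-specific).
        def perspective_correct_lerp(a0, w0, a1, w1, t):
            """t is the screen-space interpolation factor between vertex 0 and vertex 1."""
            num = (a0 / w0) * (1 - t) + (a1 / w1) * t   # interpolate attribute/w
            den = (1 / w0) * (1 - t) + (1 / w1) * t     # interpolate 1/w
            return num / den

        # Example: texcoord 0.0 at w=1 and 1.0 at w=4, sampled halfway across the
        # screen-space edge. A naive lerp would give 0.5; the correct value is 0.2.
        print(perspective_correct_lerp(0.0, 1.0, 1.0, 4.0, 0.5))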
    Last edited by turtle; 02-11-2007 at 01:52 PM.
    That is all.

    Peace and love.

  20. #20
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Orange County, CA
    Posts
    1,544
    Quote Originally Posted by sladesurfer
    This doesn't sound promising; it's going to have a "special" driver. It's probably just going to increase the clocks, voltage, and likelihood of failure.
    Current Setup:
    -9850 GX2's in Quad SLI config
    -Asus P5N32-SLI MB
    -2x512mb of PC2-5300 DDR2
    -Intel Celeron D OC'd to 3.2Ghz
    -Windows Me with XP theme
    -WD Caviar 20GB Hard Drive
    -Zip drive
    -Jazz drive
    -3.5" floppy drive
    -5.25" floppy drive

  21. #21
    Xtreme Mentor
    Join Date
    May 2005
    Location
    Westlake Village, West Hills
    Posts
    3,046
    Special driver? Sounds stupid and a waste of time. I think this GPU will hold me for a long time.
    PC Lab Qmicra V2 Case SFFi7 950 4.4GHz 200 x 22 1.36 volts
    Cooled by Swiftech GTZ - CPX-Pro - MCR420+MCR320+MCR220 | Completely Silent loads at 62c
    GTX 470 EVGA SuperClocked Plain stock
    12 Gigs OCZ Reaper DDR3 1600MHz) 8-8-8-24
    ASUS Rampage Gene II |Four OCZ Vertex 2 in RAID-0(60Gig x 4) | WD 2000Gig Storage


    Theater ::: Panasonic G20 50" Plasma | Onkyo SC5508 Processor | Emotiva XPA-5 and XPA-2 | CSi A6 Center| 2 x Polk RTi A9 Front Towers| 2 x Klipsch RW-12d
    Lian-LI HTPC | Panasonic Blu Ray 655k| APC AV J10BLK Conditioner |

  22. #22
    Xtreme Addict
    Join Date
    Jan 2006
    Location
    Lithuania, Kaunas
    Posts
    1,313
    Quote Originally Posted by Nanometer
    Special driver? Sounds stupid and a waste of time. I think this GPU will hold me for a long time.
    Sounds like a stupid marketing trick, IMO.

  23. #23
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by turtle
    All good points on possible reasoning. Although the TMUs may well have something to do with it, as that surely hampered R580, I don't think the vertex performance does to that large an extent.

    I too question how it performs so well when the GFLOP number, for instance, is substantially lower without the MUL. It actually puts it more on the level of an X1900, which makes it all the more amazing. True enough, G80 and a scalar architecture in general are much, much more efficient than R580's (and of course G70's) setup, but still, can that alone justify the difference?

    I suppose the MUL is working, as B3D points out, doing 'interpolation and perspective correction', which I imagine also has something to do with its efficiency (I think people overlook that the MUL is actually doing something). Still, I imagine the driver tweak to let it do general shading will improve overall performance in at least some scenarios, if not all. How much... well, that's the question. Perhaps it will bring the overall efficiency down in the grand scheme because, as you said, a MUL is not as useful as, say, adding another scalar unit (MADD). But if it can be used more efficiently for general shading in some (or all) scenarios than for what it's doing now, I imagine they'll use it for that. I truly believe that is what the 'driver tweak' is; this is the ace up their sleeve. The question to me, though, is still 'How much will it matter?' Apparently The Inq, or perhaps NVIDIA, thinks quite a bit.
    A unit dedicated to perspective correction isn't even worth mentioning in the white papers, much less including in theoretical FLOP numbers. Many architectures, the R300 lineage in particular, have a dedicated perspective correction unit running in the background that ATI never bothered to include in the shader diagrams or in the theoretical numbers when describing it to the press; it's essentially free in old architectures. Suddenly, by highlighting the MUL in the G80 and saying it's needed to reach the theoretical shader numbers the G80 is capable of, NVIDIA is saying that this generation, age-old routines like perspective correction are no longer "free". But in marketing, 518 GFLOPs looks a hell of a lot better than 345 GFLOPs.

    But this move by NVIDIA offers an interesting take on what their opinion of the R600 may be. It may show a lack of confidence that the R600 will match the G80's efficiency. There's a very good chance the R600 will use the same 4-way vector shader structure as its R300 ancestry, which is a tried-and-true architecture but will look dismal next to the fully scalar architecture NVIDIA is boasting. They'll have to use a mass of these shaders and a high clockspeed to create shader performance on par with NVIDIA's. On paper the theoretical numbers will actually be slightly higher (assuming the clockspeed of slightly over 800MHz holds true), but in real-world utilization NVIDIA will have the upper hand, which is crucial. Instead, the R600 will have to rely on other aspects to beat the G80: raw fillrates, both pixel and texel, and memory bandwidth. Those will give ATI a much-needed boost in heavily bump-mapped scenarios, HDR and post-processing effects, anisotropic filtering, and anti-aliasing. Add all those together and they make up the bulk of what a graphics card has to do in modern games, and thus the R600 will have the edge.


    At least over the first generation of G80.
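    To illustrate the utilization argument above, here's a toy comparison of issue-slot usage for a 4-wide vector ALU versus scalar ALUs over a made-up instruction mix (the mix is purely hypothetical, just to show why scalar issue tends to waste fewer lanes):

    Code:
        # Toy illustration of scalar vs. 4-wide vector ALU utilization.
        # The instruction mix below is made up purely for illustration.
        op_widths = [3, 3, 1, 4, 2, 3, 1, 3]   # component count of each shader op

        useful_lanes = sum(op_widths)

        # A vec4 unit spends one 4-lane issue slot per op; unused lanes are wasted.
        vec4_utilization = useful_lanes / (len(op_widths) * 4)

        # Scalar units issue one component per slot, so no lanes are wasted
        # (ignoring dependencies, scheduling, and special-function work).
        scalar_utilization = 1.0

        print(f"vec4 ALU utilization:   {vec4_utilization:.0%}")    # ~62% for this mix
        print(f"scalar ALU utilization: {scalar_utilization:.0%}")  # 100%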
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  24. #24
    Muslim Overclocker
    Join Date
    May 2005
    Location
    Canada
    Posts
    2,786
    NVIDIA has been going for a raw performer, not a feature-oriented design, and G80 reflects that. If you ask me, if it weren't for AVIVO, NVIDIA would never have cared to improve their decoding hardware/software.

    I sure hope NVIDIA doesn't go out of their way to optimize their drivers further by reducing the visual quality of the G80's output just for better FPS... If they do that, who knows how long it will take me to recover from that stupidity.

    My watercooling experience

    Water
    Scythe Gentle Typhoons 120mm 1850RPM
    Thermochill PA120.3 Radiator
    Enzotech Sapphire Rev.A CPU Block
    Laing DDC 3.2
    XSPC Dual Pump Reservoir
    Primochill Pro LRT Red 1/2"
    Bitspower fittings + water temp sensor

    Rig
    E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB


    I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.



  25. #25
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Quote Originally Posted by Cybercat
    A unit dedicated to perspective correction isn't even worth mentioning in the white papers, much less including in theoretical FLOP numbers. Many architectures, the R300 lineage in particular, have a dedicated perspective correction unit running in the background that ATI never bothered to include in the shader diagrams or in the theoretical numbers when describing it to the press; it's essentially free in old architectures. Suddenly, by highlighting the MUL in the G80 and saying it's needed to reach the theoretical shader numbers the G80 is capable of, NVIDIA is saying that this generation, age-old routines like perspective correction are no longer "free". But in marketing, 518 GFLOPs looks a hell of a lot better than 345 GFLOPs.

    But this move by NVIDIA offers an interesting take on what their opinion of the R600 may be. It may show a lack of confidence that the R600 will match the G80's efficiency. There's a very good chance the R600 will use the same 4-way vector shader structure as its R300 ancestry, which is a tried-and-true architecture but will look dismal next to the fully scalar architecture NVIDIA is boasting. They'll have to use a mass of these shaders and a high clockspeed to create shader performance on par with NVIDIA's. On paper the theoretical numbers will actually be slightly higher (assuming the clockspeed of slightly over 800MHz holds true), but in real-world utilization NVIDIA will have the upper hand, which is crucial. Instead, the R600 will have to rely on other aspects to beat the G80: raw fillrates, both pixel and texel, and memory bandwidth. Those will give ATI a much-needed boost in heavily bump-mapped scenarios, HDR and post-processing effects, anisotropic filtering, and anti-aliasing. Add all those together and they make up the bulk of what a graphics card has to do in modern games, and thus the R600 will have the edge.


    At least over the first generation of G80.
    Very well put, and I agree with almost everything.

    If the tasks the MUL is currently handling were essentially free in past generations, on hardware with significantly less power, and it was added to G80's spec just for a >500 GFLOP 'checkbox' (there's a Geo line for ya), would those tasks really require 173 GFLOPs? I wouldn't think so. They've got to be cooking up some way to use it; it seems like such overkill for such simple operations.
    Last edited by turtle; 02-11-2007 at 08:49 PM.
    That is all.

    Peace and love.
