
Thread: Nvidia confirms the GTX 580

  1. #26
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    so the GTX 580 is a rebrand??? LOLL
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  2. #27
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by Hey Zeus View Post
    How soon everyone forgets about the 512sp GTX 480

    http://www.geeks3d.com/20100810/gefo...-with-furmark/



    Dual GF104 will be a 495 and not the 580
    Not anymore it won't. AMD started their "next gen", so NVidia pretty much has to call anything new the 5 series to stop the idiots working at stores like Best Buy from saying "well, this is a new series, while those are last year's parts".

    At any rate, if this card is based on the GF104 design and not the GF100, it could be a real winner. After all, it was the non-gaming enhancements to the core that made the GF100 so power hungry... what does a gaming GPU need ECC memory for?

    A 512-shader part based on that design should, theoretically, use less power than the 480-shader GTX 480.
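    A quick back-of-the-envelope check of that last claim (purely illustrative: the 512-shader GF104 derivative is speculation, and scaling power linearly with shader count is a crude assumption):

        # Naive power scaling from shipping 40nm Fermi cards (illustrative only).
        # Shader counts and TDPs are the published figures; the 512-SP part is hypothetical.
        known = {
            "GTX 480 (GF100)": (480, 250),     # (enabled shaders, TDP in watts)
            "GTX 460 1GB (GF104)": (336, 160),
        }
        target_sp = 512
        for name, (sp, tdp) in known.items():
            est = tdp * target_sp / sp         # assume power ~ shader count
            print(f"{name} scaled to {target_sp} SP: ~{est:.0f} W")

    Scaling from the GTX 460 gives roughly 244 W, just under the GTX 480's 250 W, which is about what the post argues; scaling from the GTX 480 itself lands near 267 W, so the GF104 starting point is doing all the work in this argument.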
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  3. #28
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by Hey Zeus View Post
    How soon everyone forgets about the 512sp GTX 480

    http://www.geeks3d.com/20100810/gefo...-with-furmark/



    Dual GF104 will be a 495 and not the 580

    unless nvidia goes the rebrand route ... LOLL


    Quote Originally Posted by DilTech View Post
    Not anymore it won't. AMD started their "next gen", so NVidia pretty much has to call anything new the 5 series to stop the idiots working at stores like Best Buy from saying "well, this is a new series, while those are last year's parts".

    At any rate, if this card is based on the GF104 design and not the GF100, it could be a real winner. After all, it was the non-gaming enhancements to the core that made the GF100 so power hungry... what does a gaming GPU need ECC memory for?

    A 512-shader part based on that design should, theoretically, use less power than the 480-shader GTX 480.

    so because amd has new gen stuff, if nvidia still uses gpus based on the gf100 series ... they can change the name without architecture mods and be ok with it, without being burned at the stake ???


    it's a rebrand if they do this .... nothing else ... and i hope everyone who said amd did one with the 6k series will cry foul when this happens
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  4. #29
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by Sn0wm@n View Post
    unless nvidia goes the rebrand route ... LOLL





    so because amd has new gen stuff, if nvidia still uses gpus based on the gf100 series ... they can change the name without architecture mods and be ok with it, without being burned at the stake ???


    it's a rebrand if they do this .... nothing else ... and i hope everyone who said amd did one with the 6k series will cry foul when this happens
    Frankly speaking, from a business standpoint NVidia has no choice but to call it a new generation; it'd be suicide in the mainstream market not to. Like I said, you'd have idiots at Best Buy and the like telling customers to buy the 6870 over the GTX 485 (if they called it that) because the 6870 is "next gen". Also, if the GTX 580 is based off the GF110, which is what current information says, then it's an entirely new chip. An entirely new chip has as much right to be called a new generation as AMD has to call the 6870 one, no?

    Both companies have done rebrands, and I think it's stupid in all cases. In fact, I vote against it with my wallet: I don't buy rebranded parts, and unless they win outright in bang for buck at the price point my friends are shopping at, I won't suggest them either.

    That said... I heavily doubt the GTX 580 is a fully unlocked GF100. Too power hungry, too hard to produce in large numbers, and frankly the performance gain is nowhere near enough to take on Cayman... It would be pointless.

    Now, if they base it off the GF104 and just expand on it, which wouldn't be difficult at all... They can take the transistors saved by removing the monster double-precision hardware and the ECC additions to the memory controller, and spend them either on widening the memory controller (there have been rumors of them going this route) or on a 512-shader, 384-bit part with cranked clockspeeds. It really wouldn't take much more die space, considering the GF104 actually has 384 shaders on it to begin with. They could even go higher than 512 and keep a 256-bit memory bus if they really wanted to.
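    To make the combinatorics concrete, here is a toy enumeration of the GF104-style configurations being speculated about (all hypothetical; the one hard constraint is GF104's 48 shaders per SM, which notably rules out an exact 512-SP part, a point raised later in the thread):

        # Hypothetical GF104-derived configs; nothing here is an announced part.
        SHADERS_PER_SM = 48                       # GF104's SM organization
        for sm_count in range(8, 13):             # GF104 itself has 8 SMs (384 SP)
            shaders = sm_count * SHADERS_PER_SM
            for bus_bits in (256, 320, 384):      # plausible memory bus widths
                print(f"{sm_count} SMs: {shaders} SP, {bus_bits}-bit bus")

    With 48 SP per SM the totals come out to 384, 432, 480, 528 or 576; an exact "512-shader part" only works with a 32-SP-per-SM, GF100-style layout.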

    The options are still strong for NVidia; as much as people on here like to think things are doom and gloom, they aren't. The question is, which route will they take? There are routes that are clearly the best course of action, and ones that are liable to blow up in their face. For all OUR sakes, we should hope they choose the best course, as that will lead to all of us winning.

    Remember, if the GTX 580 is a monster, Cayman will have to be cheaper. If the GTX 580 doesn't win, we get stuck seeing above-MSRP Cayman cards.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  5. #30
    Xtreme Member
    Join Date
    Oct 2005
    Posts
    197
    Is this like when Nvidia renamed the GTX 380 to 480? They didn't know ATi was gonna go ahead and refresh their products once a year.

  6. #31
    Xtreme Member
    Join Date
    Apr 2006
    Location
    los angeles
    Posts
    387
    idk why anyone on xtremesystems complains about heat...
    someone enlighten me?
    Seti@Home Optimized Apps
    Heat
    Quote Originally Posted by aNoN_ View Post
    pretty low score, why not higher? kingpin gets 40k in 3dmark05 and 33k in 06 and 32k in vantage performance...

  7. #32
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    The GF110 chip is a rumor right now, until nvidia makes an official statement ... we've heard so many rumors about its wild specs ... so all i can see is another rebrand .. and whatever you want to call it, it's still a rebrand ... amd shrank the die size while keeping the same performance as last-gen stuff ... so they must have done a lot more than just change the name because the market asked for it ....

    i do agree that nvidia still has some strong plays left at 40nm ... but if they call it the 500 series .. it's a dumb marketing move that people should cry foul over ... dual gf104 wouldn't be a bad idea .... or a modded gf104 would be good also ...
    Last edited by Sn0wm@n; 10-22-2010 at 06:48 PM.
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  8. #33
    Xtreme Guru
    Join Date
    Jun 2010
    Location
    In the Land down -under-
    Posts
    4,452
    To buy the 580, or wait for Cayman XT?

    Another thing I find funny is AMD/Intel would snipe any of our Moms on a grocery run if it meant good quarterly results, and you are forever whining about what feser did?

  9. #34
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    "Remember, if the GTX 580 is a monster, cayman will have to be cheaper. If the GTX 580 doesn't win, we get stuck seeing above MSRP Cayman cards. "


    You do realize that the 580 is going to be a novelty, right..? With absolutely no significant use, or placement. It's existence is to save face but it won't be cost/performance orientated. Honestly, what do u think the 580 will cost?

  10. #35
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Sn0wm@n View Post
    unless nvidia goes the rebrand route ... LOLL





    so because amd has a new gen stuff and if nvidia still uses gpu's based on the gf100 series ... they can change the name without architecture mods and be ok with it without being burned to stick ???


    its a rebrand if they do this .... nothing else ... and i hope everyone who said amd did one for the 6k series will cry foul when this happens
    I agree with DilTech: basically this 5 series name is being forced by AMD's hand. It would be harder to sell a 4 series when AMD has already released its 6 series parts.

    If we look at the Juniper-to-Barts shift and the GF100 --> GF104 shift, there are probably more differences in the architecture of GF104 than in the Barts change.

    GF104 changed the ratio of everything: 48 shaders per block instead of 32, significantly different TMU ratios, and a different ROP ratio.

    GF100

    * 16 CUDA cores (#1)
    * 16 CUDA cores (#2)
    * 16 Load/Store Units
    * 16 Interpolation SFUs (not on NVIDIA's diagrams)
    * 4 Special Function SFUs
    * 4 Texture Units

    GF104
    * 16 CUDA cores (#1)
    * 16 CUDA cores (#2)
    * 16 CUDA cores (#3)
    * 16 Load/Store Units
    * 16 Interpolation SFUs (not on NVIDIA's diagrams)
    * 8 Special Function SFUs
    * 8 Texture Units

    "Ultimately superscalar execution serves 2 purposes on GF104: to allow it to issue instructions to the 3rd CUDA core block with only 2 warps in flight, and to improve overall efficiency. In a best-case scenario GF104 can utilize 4 of 7 execution units, while GF100 could only utilize 2 of 6 execution units."

    It also lowered the FP (double-precision) performance in favor of a smaller die, like Barts.

    AnandTech even said part of the GTX 460 naming is conservative:

    "Given these differences, we’re a bit dumbfounded by the naming. With the differences in memory and the differences in the ROP count, the two GTX 460 cards are distinctly different. If NVIDIA changed the clockspeeds in the slightest, we’d have the reincarnation of the GTX 275 and GTX 260. NVIDIA’s position is that the cards are close enough that they should have the same name, but this isn’t something we agree with. One of these cards should have had a different model number – probably the 768MB card with something like the GTX 455. The 1GB card does not eclipse the 768MB card, but this is going to lead to a lot of buyer confusion. The best GTX 460 is not the $199 one."

    If AMD hadn't changed their series to 6xxx, NV probably wouldn't have gone with a GTX 5xx name change.

    http://www.anandtech.com/show/3987/a...range-market/2

    If you look at what changes they made to the actual chips, NV did more in the GF100 --> GF104 change than AMD did going from Cypress technology to Barts. Barts seems more about rebalancing Cypress and adding better tessellation performance.

    A GF110 based on GF104 technology (instead of GF100) is just as much of a change as Juniper to Barts. Not performance-wise, but architecturally.

    Not saying it would be right, but they almost have to, because AMD has released their 6xxx series.

    I think everyone will be seriously impressed, though, considering how many problems GF100 has (it's kind of a mutt of an architecture, a cGPU that does games), if GF110 is able to get 20-30% more performance out of the Fermi architecture on 40nm (for the same or lower power usage). Honestly, most people have low expectations for Fermi on this process, myself included, and as a result a GF110 that can take Cayman would be a pleasant surprise for everyone. Because on performance per watt and performance per die size, they are pretty big losers to everyone except benchers.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  11. #36
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by Xoulz View Post
    "Remember, if the GTX 580 is a monster, cayman will have to be cheaper. If the GTX 580 doesn't win, we get stuck seeing above MSRP Cayman cards. "


    You do realize that the 580 is going to be a novelty, right..? With absolutely no significant use, or placement. It's existence is to save face but it won't be cost/performance orientated. Honestly, what do u think the 580 will cost?
    We won't know what the 580 is until more information is released. That includes its market position (other than the fact that it will be high end) as well as its pricing.

    If I'm right and the GTX 580 is GF104-based, it'd have a smaller die at 512 shaders than the GTX 480, and a much smaller die if they stay at a 256-bit memory bus. It wouldn't be too hard to price it competitively against Cayman, which isn't exactly going to be a small die itself. Please also bear in mind that the GF104 is quite a bit more efficient than the GF100 chip... As such, a GTX 580 based on this design would, in fact, be to the GTX 480 what the 6870 is to the 5870 in terms of differences, except it'd be FASTER than its predecessor.
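    Rough area arithmetic behind the "smaller die" claim (a sketch; the die sizes are the ones quoted elsewhere in this thread, roughly 332 mm² for GF104 and 529 mm² for GF100, and scaling area linearly with shader count is an assumption):

        # Naive die-area estimate for a hypothetical 512-SP GF104-style chip.
        GF104_AREA, GF104_SP = 332, 384   # mm^2, physical shaders on the die
        GF100_AREA = 529                  # mm^2 (GTX 480), for comparison
        est = GF104_AREA * 512 / GF104_SP
        print(f"~{est:.0f} mm^2 vs GF100's {GF100_AREA} mm^2")

    That lands around 443 mm², comfortably below GF100, before accounting for whatever the memory-controller decision adds or saves.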

    How much is the Cayman XT supposed to cost?

    Think logically about this... Or am I the only one who WANTS to see a price war happen?
    Last edited by DilTech; 10-22-2010 at 07:17 PM.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  12. #37
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Location
    Jakarta, Indonesia
    Posts
    924
    Quote Originally Posted by Xoulz View Post
    "Remember, if the GTX 580 is a monster, cayman will have to be cheaper. If the GTX 580 doesn't win, we get stuck seeing above MSRP Cayman cards. "


    You do realize that the 580 is going to be a novelty, right..? With absolutely no significant use, or placement. It's existence is to save face but it won't be cost/performance orientated. Honestly, what do u think the 580 will cost?
    Basically, the reason AMD GPUs throughout late 2009-2010 offered less value than in the period before is the lack of supply from TSMC's 40 nm process: low yields, plus either a lack of foresight from AMD in demand forecasting or some shady behind-the-curtain deal between certain "parties" to limit AMD's wafer allocation.

    If they learned the lesson, and TSMC's 40 nm capacity does increase and can meet AMD's demand, IMHO it would be FLAT OUT STUPID on AMD's side not to be more aggressive in winning market share while optimising profit at the same time. Why? They have COST LEADERSHIP per unit made against the competitor.

    I'm not suggesting AMD enter a silly price war against nVidia (though that might be a great short-term boon for us consumers), but they do now have more FLEXIBILITY, and thus the opportunity to tilt the battle in their favor. Winning the war for market share is very, very important, especially in the graphics world, where performance leadership changes hands from time to time. AMD doesn't have the cushion of ultra-profitable professional market dominance, so dominating the consumer graphics market is vital for their long-term well-being.

  13. #38
    Xtremely Retired OC'er
    Join Date
    Dec 2006
    Posts
    1,084
    another 20 pages, like debating how god created the universe...

  14. #39
    Xtreme Enthusiast
    Join Date
    Sep 2008
    Location
    ROMANIA
    Posts
    687
    If GF110 (GTX 580) is based on GF104 architecture, I mean 48 SP/SM (not 32/SM), it's not possible to have 512 SP, only 480/528/576 SP....
    If we add GF104 (GTX 460) + GF106 (GTS 450) we get 336 SP + 192 SP = 528 SP. Die size = 332 mm² + 240 mm² = 572 mm², but that is just for a rough estimate; united as one chip, with one IMC, it would probably be around 529 mm², like the GTX 480. Also, adding the TDPs: 150 W + 106 W = 256 W.
    It may be a 576 SP card (which sounds much better), because GF104 in the same die size could have been a full 384 SP.
    Anyway, 528 SP sounds a little better than 512 SP.
    My bet is 528/576 SP, a 320/384-bit bus, 1.20/1.5 GB, 800/4000 clocks, TDP 230-240 W (better GF110 thermals; I think Nvidia is now capable of making better chips with a new silicon revision).
    And the GTX 580 is coming... and also a full GF104. This year...
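    Spelling out that arithmetic (these are the post's own paper-exercise numbers; gluing a GTX 460 and a GTS 450 together is an estimation trick, not a real design):

        # xdan's back-of-envelope sums, reproduced (speculative estimates only).
        SP_PER_SM = 48
        print([n * SP_PER_SM for n in (10, 11, 12)])   # [480, 528, 576]; 512 is impossible
        die = 332 + 240   # GTX 460 + GTS 450 die sizes -> 572 mm^2
        sp  = 336 + 192   # enabled shaders             -> 528 SP
        tdp = 150 + 106   # TDPs                        -> 256 W
        print(die, sp, tdp)

    The point of the exercise: a single integrated chip sheds duplicated logic (one IMC, one front end), so the post discounts the summed 572 mm² down toward GF100's 529 mm², and the summed 256 W down to a guessed 230-240 W.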
    Last edited by xdan; 10-23-2010 at 12:56 AM.
    i5 2500K@ 4.5Ghz
    Asrock P67 PRO3


    P55 PRO & i5 750
    http://valid.canardpc.com/show_oc.php?id=966385
    239 BCLK validation on cold air
    http://valid.canardpc.com/show_oc.php?id=966536
    Almost 5GHz, air.

  15. #40
    Banned
    Join Date
    Jan 2003
    Location
    EU
    Posts
    318
    If nvidia had a new chip on the table right now, wouldn't they have been fanfaring about something "spectacular" left and right for some time already? They hyped GF100 from like November last year: there were architecture previews, some wooden cards, announcements etc. half a year before the product launch.
    I mean, if that's a NEW part due out this year, there would be SOME info about it, no?
    I'm personally expecting a fully enabled GF100 on a mature process; they must be getting some good chips with less leakage now. Maybe something like the C2-to-C3 move on AMD's 45nm process.
    As to the point of calling it a 580 because Fermi is last year's part: no logic there, because Fermi is THIS year's part, just like AMD's 6xxx series.

  16. #41
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by tajoh111 View Post
    I agree with DilTech: basically this 5 series name is being forced by AMD's hand. [...] Because on performance per watt and performance per die size, they are pretty big losers to everyone except benchers.


    so nvidia changed more in GF104 than amd did in barts ???? LOLL ....


    and let's blame amd for the rebrand this round too ....

    anything more to add ????
    Last edited by Sn0wm@n; 10-23-2010 at 06:04 AM.
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  17. #42
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    We are not getting new GPUs this round, from either AMD or nVidia. That would require a shrink (to 22nm) that has been delayed, which screwed up both of them. These new cards are not rebrands either; they are refreshes based on the "old" architecture.

    Therefore, the flexibility of the "old" architecture is going to be very important in the fight between the upcoming high-end cards (Cayman vs GTX 580, or whatever they call them), in my opinion.

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  18. #43
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    no new gpus from amd ??? LOLL


    modified architecture .... 35% smaller die, still the same performance ... lower tdp .. and it's the same ... LOLLL
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  19. #44
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by Sn0wm@n View Post
    no new gpus from amd ??? LOLL


    modified architecture .... 35% smaller die, still the same performance ... lower tdp .. and it's the same ... LOLLL
    This is a "refresh", based on the "old" 40nm. Of course, both AMD and nVidia are going to tweak the "old" architecture for better PPP (price, performance, power usage), and the flexibility of "old" architecture will decide the outcome for upcoming "refreshes" .

    No reason to LOL at others' ideas, even if you don't catch them or they're different from yours. Remember, he who LOLs last LOLs best. LOL

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  20. #45
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Good to see DilTech posting again, pretty much lifting an entire thread out of the dumpster with some good posting rather than ignorant fanboys ruining it. Finally, some posts that are actually worth reading.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  21. #46
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by DilTech View Post
    Not anymore it won't. AMD started their "next gen", NVidia pretty much have to call anything new the 5 series to stop the idiots working at stores like best buy from saying "well, this is a new series, while those are last years parts".

    At any rate, if this card is based on the GF104 design and not the GF100, it could be a real winner. After all, it was the non-gaming enhancements to the core that made the GF100 so power hungry... After all, what does a gaming gpu need the ability for ECC memory for?

    A 512 shader part based on that design should, theoretically, use less power than the 480 shader enabled GTX 480.
    Really? Then why does the 68xx series thrash the GTX 460 for less power?
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  22. #47
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Tbh, if nvidia had a GF104 chip with some tweaks and just plain stacking up of CUDA cores, I don't see the problem with calling it a 5xx series. Both amd and nvidia had problems with the 32nm transition, and it's normal that this next batch of cards won't be revolutionary.
    As long as they can keep up with Cayman they will be fine, and I'm quite certain they can provide a competitor to Cayman if they work on their golden GF104 chip.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  23. #48
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    188
    Is the GTX 580 going to be a dual GPU card?
    My System

    Core i7 970 @ 4.0Ghz
    Asus P6X58D Mobo
    6GB DDR3 Corsair 1600 Memory
    1000watt Corsair PSU
    Windows 7 64bit
    EVGA GTX 670 SC 4GB

  24. #49
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by Sam_oslo View Post
    This is a "refresh", based on the "old" 40nm. Of course, both AMD and nVidia are going to tweak the "old" architecture for better PPP (price, performance, power usage), and the flexibility of "old" architecture will decide the outcome for upcoming "refreshes" .

    No reason to LOL on others ideas. Even if you don't catch them, or are different than yours. remember, he who LOL's last LOLs best. LOL
    so there's an old and a new 40nm .... LOLL


    refresh???? rumors or facts .... and what's your proof of such claims .. other than some unproven slide ...

    amd did tweak their old architecture a lot ... but nvidia didn't yet ... it still needs to be proven ....



    Quote Originally Posted by Dimitriman View Post
    Tbh, if nvidia had a GF104 chip with some tweaks and just plain stacking up of CUDA cores, I don't see the problem with calling it a 5xx series. Both amd and nvidia had problems with the 32nm transition, and it's normal that this next batch of cards won't be revolutionary.
    As long as they can keep up with Cayman they will be fine, and I'm quite certain they can provide a competitor to Cayman if they work on their golden GF104 chip.

    but putting 2 GF104s together with minor rework would still mean GF104 ..... so it's a rebrand if they call it GTX 5xx ...



    Quote Originally Posted by Tim View Post
    Good to see DilTech posting again, pretty much lifting an entire thread out of the dumpster with some good posting rather than ignorant fanboys ruining it. Finally, some posts that are actually worth reading.


    really ???? and what would that be?
    Last edited by Sn0wm@n; 10-23-2010 at 07:58 AM.
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  25. #50
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    [edit]

    ah forget it
    Last edited by flippin_waffles; 10-23-2010 at 08:56 AM.
