Page 9 of 49
Results 201 to 225 of 1220

Thread: Nvidia confirms the GTX 580

  1. #201
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by Sn0wm@n View Post
    proof of that wild and unfounded statement
    It was my jab at the comment that the 6xxx series had little to no changes and didn't deserve a new generational name. Funny how it feels when it's the other way around

    (and actually, there is some foundation; read the B3D leaks some users were giving when the wild rumors about a 512-bit, 768-core GTX 580 were being broadcast)

  2. #202
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Location
    Jakarta, Indonesia
    Posts
    924
    Quote Originally Posted by trinibwoy View Post
    I must have entered the twilight zone. Last I checked nVidia was blamed for everything around here including the recession and crying babies. Where did this "can do no wrong" joke come from?
    You haven't been around here as much lately, mate; it's kinda funny how the victim mentality & attitude has shifted to the green side for some peeps. I understand your disgust regarding the pity toward AMD as the underdog in the graphics business, but at least nVidia has been leading for quite a long time; now that the tables have turned for just one generation, the exact opposite stuff has started to come out of the closet. Don't you know that nVidia's in desperate mode ?? That would be the noble reason for renaming & rebranding, if that tactic really happens (yet again) this time.

    Quote Originally Posted by trinibwoy View Post
    A new name doesn't always bring generational change. See 2900XT->HD3870 and 8800GTX->9800GTX. I'm looking forward to seeing this 580. It could answer the question of whether the architecture is inherently over-engineered or if GF100 was just an inefficient implementation.
    Yes, of course you're right, and the opposite can happen too, such as G71 to G80, or R200 to R300; that's common knowledge for laypeople who aren't too ignorant to check their facts first. But just look at a few threads regarding the Barts launch and the GTX 580 speculation here; you just have to find the hilarious stuff yourself regarding my earlier joke. (hint: it comes from a guy originating in one of the northern European countries).

    EDIT:

    On a serious note, checking the heatsink under the shroud, I can't help thinking of its similarity to the one used on the R600, IIRC & CMIIW.

    Judging from that pic, does anybody have any clue or theory regarding its capacity & capability vs the one used on the GTX 480 ?? Those heatpipes look bulky, though, while the dissipating area seems limited; can a higher volume & pressure of moving air overcome that particular problem ?

    nVidia knows they can get away with the GTX 480's class-leading noise; I think they still have room to up the decibels before fanbois turn in their noise-cancelling headphones for good.
    Last edited by spursindonesia; 10-28-2010 at 05:58 PM.

  3. #203
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by tajoh111 View Post
    Rumors are suggesting this is a 512 shader card with 128 TMUs, which in my opinion is not going to give it a clear advantage over cayman.

    If that picture is true, I am happy they got rid of the hole in the card.
    My friend, it will take a very, very great green magic hat to pull off a card that has "a clear advantage over Cayman".

    I'd be happy if they achieve anything that's simply competitive.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  4. #204
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Location
    Jakarta, Indonesia
    Posts
    924
    Quote Originally Posted by Dimitriman View Post
    My friend, it will take a very, very great green magic hat to pull off a card that has "a clear advantage over Cayman".

    I'd be happy if they achieve anything that's simply competitive.
    Actually, I think a full 512 SP chip with 128 TMUs would be a plenty decent chip if nVidia's engineers can pull it off. At least it would fit Barts' description of reworking the chip design while the underlying mArch stays the same. Whatever the end result, from an engineering standpoint that's quite an achievement & I respect that.

    Now, if it's really just a respinned GF100 with the exact same number of units & a fully copied mArch, I'm gonna LOL, hard.

  5. #205
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by spursindonesia View Post
    Actually, I think a full 512 SP chip with 128 TMUs would be a plenty decent chip if nVidia's engineers can pull it off. At least it would fit Barts' description of reworking the chip design while the underlying mArch stays the same. Whatever the end result, from an engineering standpoint that's quite an achievement & I respect that.

    Now, if it's really just a respinned GF100 with the exact same number of units & a fully copied mArch, I'm gonna LOL, hard.
    Yes, and I mean that even with all the changes you mentioned, it will still be highly unlikely that the 580 walks over Cayman. Remember, Cayman will have 1920 4D shaders, with even further improvements to tessellation over Barts, as well as other architectural changes and 6Gbps memory.
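    For reference, a quick back-of-envelope check of what "6Gbps memory" would mean for bandwidth. The 256-bit bus width below is purely an assumption (Cypress-class); Cayman's actual bus width was still unconfirmed rumor at this point:

```python
# Peak GDDR5 bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# NOTE: the 256-bit bus is an assumption, not a confirmed Cayman spec.
def peak_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(6.0, 256))  # 192.0 GB/s at the rumored 6 Gbps
print(peak_bandwidth_gb_s(4.8, 256))  # 153.6 GB/s, roughly a stock HD 5870
```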
    Last edited by Dimitriman; 10-28-2010 at 06:16 PM.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  6. #206
    Xtreme Addict
    Join Date
    Sep 2010
    Location
    Australia / Europe
    Posts
    1,310
    Quote Originally Posted by Sn0wm@n View Post
    ohh kung fu panda gpu :O
    That's right, and as any nvidia fanboi knows, Kung Fu Panda kicked ass even though he was fat, lazy, and hungry as hell
    draw the analogy

  7. #207
    Xtreme Member
    Join Date
    Oct 2009
    Posts
    241
    Quote Originally Posted by Dimitriman View Post
    Yes, and I mean that even with all the changes you mentioned, it will still be highly unlikely that the 580 walks over Cayman. Remember, Cayman will have 1920 4D shaders, with even further improvements to tessellation over Barts, as well as other architectural changes and 6Gbps memory.
    And only 1 gig of RAM to stop the parade; I hope that's not true.
    .:. Obsidian 750D .:. i7 5960X .:. EVGA Titan .:. G.SKILL Ripjaws DDR4 32GB .:. CORSAIR HX850i .:. Asus X99-DELUXE .:. Crucial M4 SSD 512GB .:.

  8. #208
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Location
    Jakarta, Indonesia
    Posts
    924
    Quote Originally Posted by Dimitriman View Post
    Yes, and I mean that even with all the changes you mentioned, it will still be highly unlikely that the 580 walks over Cayman. Remember, Cayman will have 1920 4D shaders, with even further improvements to tessellation over Barts, as well as other architectural changes and 6Gbps memory.
    Nothing's set in stone, especially for Cayman; AMD's secrecy has been top notch so far. One thing I learned from the Barts launch is how royally I suck at the layman mArch speculation game.

    It is what it is; I'm keeping my expectations low for Cayman, it might turn out to be a flop (wow, me being a pessimist !! world is near its end ). Especially if the underlying mArch has changed: with that 4D shader array rumor, it won't be an easy job for AMD's driver team to extract optimum performance out of this new baby early on.

    We'll see.

  9. #209
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by Dimitriman View Post
    Yes, and I mean that even with all the changes you mentioned, it will still be highly unlikely that the 580 walks over Cayman. Remember, Cayman will have 1920 4D shaders, with even further improvements to tessellation over Barts, as well as other architectural changes and 6Gbps memory.
    6gbps memory ??? :O

    Quote Originally Posted by kuroikenshi View Post
    That's right, and as any nvidia fanboi knows, Kung Fu Panda kicked ass even though he was fat, lazy, and hungry as hell
    draw the analogy


    LOLL .... nice one mate
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  10. #210
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by Sn0wm@n View Post
    6gbps memory ??? :O

    LOLL .... nice one mate
    Official info so far on Cayman: yes, 1GB sucks, but there should be plenty of partners making 2GB versions.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  11. #211
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    cayman is starting to sound more and more epic

    thanks for the slide
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  12. #212
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Dimitriman View Post
    My friend, it will take a very, very great green magic hat to pull off a card that has "a clear advantage over Cayman".

    I'd be happy if they achieve anything that's simply competitive.
    I entirely agree, which is why if I buy a card this generation, it will likely be a Cayman card. The best NV can do at the moment is reduce power consumption so they can make a full 512 SP GTX with more TMUs (and the best that can do is match Cayman). They might have added more shaders, but rumors are saying otherwise.

    Nvidia's Fermi chip needs a huge redesign. If they can keep the performance but knock 33% off the power consumption, I would be happier than with 25% more performance at a 15% power increase. Fermi should never consume more power than a 5970, yet it does, which simply shocks me considering it's significantly slower and craps out at high resolutions even though it has more VRAM. They need to scale down power somehow so this architecture has legs, because trying to scale up when you're limited by power and heat just limits how fast the chip can be, and it stops the architecture from getting more complex and denser, which is needed for the future. 33% is pretty much impossible unless they vastly improve their shader efficiency and reduce the size of the chip. It would almost take a miracle, since Fermi was designed to be a cGPU more than anything else. At this point, Nvidia can't have everything: they can't have both a great cGPU architecture and a great gaming architecture, since doing both makes the chips larger than they need to be and decreases shader efficiency in games. Nvidia needs to either focus on gaming or make two separate architectures if they want to get competitive on die size/performance per watt with AMD.
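    Putting rough numbers on that trade-off (baseline GTX 480 normalized to 1.0 performance at 1.0 power; the scenario figures are just the post's own hypotheticals, not measurements):

```python
# Compare the two hypothetical Fermi refreshes on performance per watt.
# Baseline (GTX 480) is normalized to perf = 1.0 at power = 1.0.
def perf_per_watt(perf, power):
    return perf / power

knock_off_power = perf_per_watt(1.00, 1.00 - 0.33)  # same speed, -33% power
faster_hotter = perf_per_watt(1.25, 1.00 + 0.15)    # +25% speed, +15% power

print(round(knock_off_power, 2))  # 1.49 -- ~49% better perf/W than baseline
print(round(faster_hotter, 2))   # 1.09 -- only ~9% better perf/W
```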

    The hype is getting ridiculous, and I don't think any card can live up to the R300 or G80. R300 smashed the GeForce 4, which was a great card to begin with. Beating Fermi is to be expected, because it is simply not that good a card. If this card is as large as the rumors have led us to believe, I don't think Cayman XT can be slower than Fermi unless they really, really botch something up.
    Last edited by tajoh111; 10-28-2010 at 06:51 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  13. #213
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by tajoh111 View Post
    I entirely agree, which is why if I buy a card this generation, it will likely be a Cayman card. The best NV can do at the moment is reduce power consumption so they can make a full 512 SP GTX with more TMUs (and the best that can do is match Cayman). They might have added more shaders, but rumors are saying otherwise.

    Nvidia's Fermi chip needs a huge redesign. If they can keep the performance but knock 33% off the power consumption, I would be happier than with 25% more performance at a 15% power increase. Fermi should never consume more power than a 5970, yet it does, which simply shocks me considering it's significantly slower and craps out at high resolutions even though it has more VRAM. They need to scale down power somehow so this architecture has legs, because trying to scale up when you're limited by power and heat just limits how fast the chip can be, and it stops the architecture from getting more complex and denser, which is needed for the future. 33% is pretty much impossible unless they vastly improve their shader efficiency and reduce the size of the chip. It would almost take a miracle, since Fermi was designed to be a cGPU more than anything else. At this point, Nvidia can't have everything: they can't have both a great cGPU architecture and a great gaming architecture, since doing both makes the chips larger than they need to be and decreases shader efficiency in games. Nvidia needs to either focus on gaming or make two separate architectures if they want to get competitive on die size/performance per watt with AMD.
    I agree, and I've also said many times that I believe Nvidia can make something very powerful by scaling up the GF104 chip. Whatever changes they made with that chip worked very well; instead of scaling up the power-hungry GF100, they should scale the 104 to 512 CUDA cores and 128 TMUs. That should help keep the power in range and make a very fast card.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  14. #214
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    how the hell will nvidia pull off 30% less power while adding more performance???


    isn't there such a thing as physical limitations in this world of ours
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  15. #215
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
    Quote Originally Posted by Sn0wm@n View Post
    how the hell will nvidia pull off 30% less power while adding more performance???


    isn't there such a thing as physical limitations in this world of ours
    Did you even consider Intel and AMD reducing power consumption and increasing performance with newer steppings? It happens all the time.
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  16. #216
    Xtreme Member
    Join Date
    Jun 2005
    Posts
    442
    Quote Originally Posted by Dimitriman View Post
    Official info so far on Cayman: yes, 1GB sucks, but there should be plenty of partners making 2GB versions.
    Don't forget, the HD 4870 512MB walked all over the 8800 GTX 768MB and the 9800 GTX+. Plus, it matched (or exceeded) the performance of the GTX 260, which had 896MB.

    Overall, GPU memory capacity means very little to card performance. It's all about the implementation of the architecture in relation to the available frame buffer. AMD/ATI seems to do a really good job of utilizing the available memory on each card to its best potential. The performance may be pretty damn good.
    PII 965BE @ 3.8Ghz /|\ TRUE 120 w/ Scythe Gentle Typhoon 120mm fan /|\ XFX HD 5870 /|\ 4GB G.Skill 1600mhz DDR3 /|\ Gigabyte 790GPT-UD3H /|\ Two lovely 24" monitors (1920x1200) /|\ and a nice leather chair.

  17. #217
    Xtreme Member
    Join Date
    Jun 2005
    Posts
    442
    Quote Originally Posted by Sn0wm@n View Post
    how the hell will nvidia pull off 30% less power while adding more performance???
    How the hell does the 2011 Hyundai Sonata get the same gas mileage as a current model Toyota Corolla? And yet the Sonata is much larger and has a lot more power under the hood.

    Technology advances. However, a 30% performance increase while utilizing less power does sound a little far-fetched.
    PII 965BE @ 3.8Ghz /|\ TRUE 120 w/ Scythe Gentle Typhoon 120mm fan /|\ XFX HD 5870 /|\ 4GB G.Skill 1600mhz DDR3 /|\ Gigabyte 790GPT-UD3H /|\ Two lovely 24" monitors (1920x1200) /|\ and a nice leather chair.

  18. #218
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by highoctane View Post
    Did you even consider Intel and AMD reducing power consumption and increasing performance with newer steppings? It happens all the time.

    I made my comment because tajoh seemed to be dreaming ...


    how can nvidia reduce power by 33% while retaining the same performance ... on the same chip with a simple respin ...

    Quote Originally Posted by Mad Pistol View Post
    How the hell does the 2011 Hyundai Sonata get the same gas mileage as a current model Toyota Corolla? And yet the Sonata is much larger and has a lot more power under the hood.

    Technology advances. However, a 30% performance increase while utilizing less power does sound a little far-fetched.


    LOL not the same thing .. cars vs GPUs ...

    but you're right that it does look pretty far-fetched
    Last edited by Sn0wm@n; 10-28-2010 at 07:06 PM.
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  19. #219
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Sn0wm@n View Post
    how the hell will nvidia pull off 30% less power while adding more performance???


    isn't there such a thing as physical limitations in this world of ours
    By increasing shader performance in games and decreasing shader performance for cGPU, which requires a vast overhaul of everything.

    If you look at Fermi's performance in cGPU applications, it actually shows why it consumes so much power.

    http://hothardware.com/Reviews/NVIDI...Review/?page=2

    It typically doubles and can occasionally triple the performance of a V8800 (which is based on the 5870). Not only that, this card isn't even close to being a fully specced Fermi, and it has much lower clocks.

    Fermi, as it is right now, is not a scalable GPU for the future (it is a scalable cGPU for the future, though). It's probably why Intel gave up on Larrabee. It's difficult, and probably impossible, to make a card that does both.

    Quote Originally Posted by Sn0wm@n View Post
    I made my comment because tajoh seemed to be dreaming ...

    how can nvidia reduce power by 33% while retaining the same performance ... on the same chip with a simple respin ...

    LOL not the same thing .. cars vs GPUs ...

    but you're right that it does look pretty far-fetched
    Are you purposely trying to troll? I've been emphasizing that they need a redesign; I said nothing about a respin doing all that. I said that with the current architecture they won't catch Cayman on performance per watt; at best they'll catch up on performance, but with higher power consumption.
    Last edited by tajoh111; 10-28-2010 at 07:11 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  20. #220
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    and how long does a major overhaul take, in general ????


    and how can someone manage higher shader utilisation without increasing power ... and that's if the games aren't already maxed out by nvidia's magical driver team
    Last edited by Sn0wm@n; 10-28-2010 at 07:12 PM.
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.

  21. #221
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
    Quote Originally Posted by Sn0wm@n View Post
    I made my comment because tajoh seemed to be dreaming ...


    how can nvidia reduce power by 33% while retaining the same performance ... on the same chip with a simple respin ...
    Like I said, it's done all the time. How did AMD go from the first PII 940s at 3GHz to a 975 at 3.6GHz in the same TDP with higher performance? As you would seem to put it, "a simple respin."

    Not to mention what they've done with the X6.
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  22. #222
    Xtreme Enthusiast
    Join Date
    Sep 2007
    Location
    Jakarta, Indonesia
    Posts
    924
    Quote Originally Posted by Sn0wm@n View Post
    and how long does a major overhaul take, in general ????


    and how can someone manage higher shader utilisation without increasing power ... and that's if the games aren't already maxed out by nvidia's magical driver team
    He isn't dreaming, man, just somewhat, uh, pessimistic & plenty negative when it comes to AMD graphics as of late ?

    I'm ready with my internet vacation ticket away from this forum, posting-wise, if his US$ 500+ official MSRP prediction for Cayman XT comes true ; OTOH, if that doesn't happen .......

  23. #223
    Banned
    Join Date
    Jan 2010
    Location
    Heaven
    Posts
    227
    Only 1 GB on Cayman? Come on AMD

  24. #224
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Sn0wm@n View Post
    and how long does a major overhaul take, in general ????


    and how can someone manage higher shader utilisation without increasing power ... and that's if the games aren't already maxed out by nvidia's magical driver team
    That's for the engineers to decide, but it's kind of scary that between generations Fermi's per-shader power decreased; they took a step backwards. If Fermi's shaders were as powerful as last generation's, we would have a card pretty close to as fast as a 5970: GTX 285 SLI performance plus the extra speed of the added clocks.

    http://www.anandtech.com/bench/Product/167?vs=165

    The fact that they decreased gaming efficiency and more than doubled the transistor count shows where all the performance went.

    E.g. the GTX 460 1GB performs similarly to the GTX 285 even though the GTX 460 has the shader-count and core-clock advantage.
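    That comparison can be made concrete with theoretical shader throughput. The clocks and FLOPs-per-clock figures below are my own commonly cited reference specs, not from the post: GT200's dual-issue MAD+MUL counts 3 FLOPs per shader per clock, while GF104's FMA-only design counts 2.

```python
# Theoretical single-precision GFLOPS =
#   shaders * shader clock (MHz) * FLOPs per clock / 1000.
def gflops(shaders, shader_clock_mhz, flops_per_clock):
    return shaders * shader_clock_mhz * flops_per_clock / 1000

gtx_285 = gflops(240, 1476, 3)  # GT200b: MAD+MUL dual issue
gtx_460 = gflops(336, 1350, 2)  # GF104 (1GB card): FMA only

print(round(gtx_285), round(gtx_460))  # 1063 907 -- newer card is lower on paper
```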

    Quote Originally Posted by spursindonesia View Post
    He isn't dreaming, man, just somewhat, uh, pessimistic & plenty negative when it comes to AMD graphics as of late ?

    I'm ready with my internet vacation ticket away from this forum, posting-wise, if his US$ 500+ official MSRP prediction for Cayman XT comes true ; OTOH, if that doesn't happen .......
    Pricing aside, performance-wise I think I've been more optimistic about AMD than about Nvidia, at least. I still have a feeling Cayman XT is going to be 500 dollars, especially if it's significantly faster than Fermi (which I think it will be).
    Last edited by tajoh111; 10-28-2010 at 07:28 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  25. #225
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Quote Originally Posted by highoctane View Post
    Like I said, it's done all the time. How did AMD go from the first PII 940s at 3GHz to a 975 at 3.6GHz in the same TDP with higher performance? As you would seem to put it, "a simple respin."

    Not to mention what they've done with the X6.


    so the real question is why didn't nvidia do it already???


    fab problems maybe???



    @tajoh111: ok, they need a big overhaul .. but how much time will it take them, in general, to pull it off ???


    a while .. and why is the R300-hyped cayman an overhyped card .. it didn't even come from AMD's mouth ... it's a rumor for now ... but all the specs we saw, which could be true or false, do seem to go in that direction ... and no, I'm not trolling .. I'm just trying to understand how this big single-chip fermi deal could be pulled off ....


    it's been how long since we started hearing about a respin of the GF100 chip to make it work .. yet still no full chip ... so it's either a redesign to cut off the unneeded GPGPU features ... but how long does that take before we see it on the market .. and would it be better to just wait for 28nm and bring those options back at 28nm instead ...
    WILL CUDDLE FOR FOOD

    Quote Originally Posted by JF-AMD View Post
    Dual proc client systems are like sex in high school. Everyone talks about it but nobody is really doing it.
