
Thread: Interview with Nvidia’s Bryan Del Rizzo on the GTX400 series @ PAX East

  1. #26
    Banned
    Join Date
    Mar 2010
    Posts
    88
    Quote Originally Posted by zerazax View Post
    Power consumption by Nvidia is pretty clearly untrue. TDP of 5870 is 62W lower than the 480, but every review pegs the 480 at 100W more than the 5870. Furthermore, the GTX 480 SLI reviews show power consumption > 300W by the card (over 370 by the Anandtech review, and TechPowerUp estimates 320W by the card alone, not at the wall), which clearly shows that the card is drawing FAR more than Nvidia rated.

    So him saying 295/300W is closer to the truth than Nvidia is suggesting





    Truth be told, given that Nvidia supposedly didn't decide on 480 SP's until recently, and clocks weren't finalized until about 2 weeks ago, Charlie was probably accurate - at the time he published it.

    Remember those GTX 480 ES's supposedly clocked at 600 MHz? He probably got those clocks from there
    Unfortunately for him, no. 448sp was never correct for any final product. Neither was 600mhz ever a plan for any product.

    As for TDP, yeah it seems most reviewers are testing with furmark for absolute max possible power draw.

    But AMD has the same situation.. they claim 180 but draw well over 200 in some cases, right? So I think NVIDIA has been dragged into the same measurement methods (whether those methods are basically "max typical load excluding synthetic tests like furmark" or not) to get a fair shake.
    Last edited by Svnth; 03-27-2010 at 02:38 AM.

  2. #27
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by Svnth View Post
    Unfortunately for him, no. 448sp was never correct. Neither was 600mhz.
    448SP and 600MHz were right for the 470. So in regards to the 480, he was wrong, sure... but again, you have to look at the timing. Less than a month ago, 512 SPs + 600 MHz was what was confirmed, but supposedly Nvidia just a couple weeks before "launch" decided 480 SPs + 700 MHz was better.

    As for TDP, yeah it seems most reviewers are testing with furmark for absolute

    But AMD has the same situation.. they claim 180 but draw well over 200 in some cases, right? So I think NVIDIA has been dragged into the same measurement methods (whether they are basically "max typical load excluding synthetic tests like furmark" or not) to get a fair shake.
    No, not every reviewer was testing with furmark for absolute... look up the TomsHardware test. They used Unigine and saw the card outdraw even the 5970 over the time of the benchmark.

    Look at Anandtech's review... in Crysis, the 480 draws 100W more than the 5870, and 40W more than the 5970.

    In fact, Furmark was better for Nvidia if comparing the 480 to the 5970.



    The 58xxs are actually drawing what they're rated for in Furmark - the GTX 480, even in the most favorable settings, is drawing well above what it's rated for.

    Look at the image above... the 5870CF draws 223W more than the 5870 single. Anandtech is apparently using an 82% efficiency figure, so 223W at the wall @ 82% efficiency = 183W consumed by the extra card, which is under the 188W it is rated for.

    So no, your statement is incorrect - Nvidia is the one drawing far above.
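
    For reference, that arithmetic works out as follows. This is a quick Python sketch, assuming the ~82% PSU efficiency figure and the 188W rating quoted above (the post's own assumptions, not independently measured values):

        # Back-of-the-envelope check of the CrossFire delta described above.
        # Assumes ~82% PSU efficiency and AMD's 188 W rating for the HD 5870.
        PSU_EFFICIENCY = 0.82
        RATED_5870_W = 188            # AMD's board power rating, in watts

        wall_delta_w = 223            # extra wall draw from the second 5870 in CF
        card_draw_w = wall_delta_w * PSU_EFFICIENCY

        print(f"Estimated draw of the extra card: {card_draw_w:.0f} W")   # ~183 W
        print(f"Within the 188 W rating: {card_draw_w <= RATED_5870_W}")  # True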

  3. #28
    Banned
    Join Date
    Mar 2010
    Posts
    88
    Quote Originally Posted by zerazax View Post
    448Sp and 600MHz was right for the 470. So in regards to the 480, he was wrong sure... but again, you have to look at the timing. Less than a month ago, 512 SPs + 600 MHz was what was confirmed, but supposedly Nvidia just a couple weeks before "launch" decided 480 SPs + 700 MHz was better.



    No, not every reviewer was testing with furmark for absolute... look up the TomsHardware test. They used Unigine and saw the card outdraw even the 5970 over the time of the benchmark.

    Look at Anandtech's review... in Crysis, the 480 draws 100W more than the 5870, and 40W more than the 5970.

    In fact, Furmark was better for Nvidia if comparing the 480 to the 5970.



    The 58xxs are actually drawing what they're rated for in Furmark - the GTX 480, even in the most favorable settings, is drawing well above what it's rated for.

    Look at the image above... the 5870CF draws 223W more than the 5870 single. Anandtech is apparently using an 82% efficiency figure, so 223W at the wall @ 82% efficiency = 183W consumed by the extra card, which is under the 188W it is rated for.

    So no, your statement is incorrect - Nvidia is the one drawing far above.
    Actually no, your statement is incorrect. I'm well aware that NVIDIA is drawing more than AMD, but in case you weren't aware, AMD throttles furmark specifically to avoid this case. NVIDIA does not. Hence furmark isn't a great way to compare these cards against their rated draw. I'm not saying NVIDIA has accurate ratings, since many seem to agree it's more than claimed.. I'm simply stating that in the maximum power draw case, AMD does it too (to some degree, I know this for a fact).

    Additionally you don't just tack on 100mhz to your core in the last two weeks.. sorry it doesn't work like that. As I explained in another post, you can sit and guess numbers all day, and eventually something will line up. The fact is Charlie guessed both 448 shaders and 512 shaders in two different articles, and was wrong on both.

    Charlie said NVIDIA could *only* hit 600Mhz, meaning he was referring to their high end SKU. You don't get to look back and say "oh he was referring to the GTX 470 all along!" just because the numbers match. He was referring to the best possible scenario for the chip, which he was drastically wrong about.
    Last edited by Svnth; 03-27-2010 at 03:02 AM.

  4. #29
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by Svnth View Post
    Actually no, your statement is incorrect. I'm well aware the NVIDIA is drawing more than AMD, but in case you weren't aware, AMD throttles furmark specifically to avoid this case. NVIDIA does not. Hence furmark isn't a great way to compare these vs. their rated draw amounts. I'm not saying NVIDIA has accurate ratings, since many seem to agree it's more than claimed.. I'm simply stating that in the maximum power draw case, AMD does it too (to some degree, I know this for a fact).
    You won't find me a big fan of Furmark - in fact, I loathe the fact that they made software to simulate loads that can never exist. That said, AMD added those hardware monitors on the chip to make sure it doesn't exceed the TDP (the maximum rated for safely operating the chip) - but that doesn't mean AMD's cards don't go above their limit. In fact, in Techpowerup's review, they measure the power draw directly at the VGA, and the 5870 reaches 212W, above the 188W it's rated for.

    And aside from that fact... how do you know Nvidia doesn't do the same as ATI does now? In fact, looking back at the 4800 vs GTX 200s, the 4800s had to have a driver update to throttle Furmark, whereas the GTX 200's were doing just fine. Why would the GTX 400's suddenly do worse in management?

    Now, as I stated, I haven't been a fan of Furmark, since it's an unrealistic workload, but that's why I'm looking at real software loads. Look at this Tomshardware chart for a Unigine run:


    The 5870 is rated at 62W less than the 480, but draws a whopping 120W less in Unigine. Heck, the 5970 draws 60W less but is rated at 44W more.

    Bit-tech did the same thing using 3DMark06, and the 480 drew 100W more, etc. Something obviously is not adding up between the power these cards draw and what they're rated for.
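
    To put that gap in numbers — a small Python sketch using the vendors' published board power ratings (250W GTX 480, 188W HD 5870, 294W HD 5970) against the Unigine deltas quoted above:

        # Rated TDP gaps vs the measured gaps quoted from the Unigine run.
        RATED_W = {"GTX 480": 250, "HD 5870": 188, "HD 5970": 294}

        rated_gap_5870 = RATED_W["GTX 480"] - RATED_W["HD 5870"]   # 62 W on paper
        print(f"5870: rated {rated_gap_5870} W below the 480, measured ~120 W below")

        rated_gap_5970 = RATED_W["HD 5970"] - RATED_W["GTX 480"]   # 44 W above on paper
        print(f"5970: rated {rated_gap_5970} W above the 480, yet measured ~60 W below")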

    Additionally you don't just tack on 100mhz to your core in the last two weeks.. sorry it doesn't work like that. As I explained in another post, you can sit and guess numbers all day, and eventually something will line up. The fact is Charlie guessed both 448 shaders and 512 shaders in two different articles, and was wrong on both.

    Charlie said NVIDIA could *only* hit 600Mhz, meaning he was referring to their high end SKU. You don't get to look back and say "oh he was referring to the GTX 470 all along!" just because the numbers match. He was referring to the best possible scenario for the chip, which he was drastically wrong about.
    Core clocks weren't finalized until a couple of weeks ago. It's a known fact that the ES's out there were at 600 MHz, and that's probably what Charlie went off of. That he claimed the final product would ship at that clock is his own stupidity.
    Last edited by zerazax; 03-27-2010 at 03:35 AM.

  5. #30
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by highoctane View Post
    A gx2 should be possible with gtx470'ish specs.
    Yeah, exactly, like they did with GTX295.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  6. #31
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    City of Lights, The Netherlands
    Posts
    2,381
    Quote Originally Posted by highoctane View Post
    A gx2 should be possible with gtx470'ish specs.
    I think they will even have to castrate the GTX470 to get a GX2 that stays within the PCIe 300 Watt specification. The problem with that is, the GX2 wouldn't be faster than the 5970. FYI, I'm talking about current silicon; things will change in the future of course, and this is just my guess.
    "When in doubt, C-4!" -- Jamie Hyneman

    Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |

  7. #32
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    There will not be a dual GF100 card with under 300W TDP using 40nm, period.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  8. #33
    Tyler Durden
    Join Date
    Oct 2003
    Location
    Massachusetts, USA
    Posts
    5,623
    Quote Originally Posted by Caparroz View Post
    EnJoy, I'm sorry, I'm just referring to the teaser you posted. I'll read the whole interview tomorrow with the full attention it deserves.
    Thank you. I put a fair amount of work into it, so I hope people take the time to read it through. I thought it came out well.
    Formerly XIP, now just P.

  9. #34
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Svnth View Post
    Don't you see the flaw in your logic though? If you make a guess, you can always find *some* product that fits that guess closely and then, after the fact, say in hindsight "oh yeah, I was talking about that".

    Ch

    Here's a video that kind of shows what I'm talking about: http://www.youtube.com/watch?v=1CJbOAvfMf8
    There is no flaw in my logic, because I was not trying to make any kind of logical argument. I said Charlie got a few things about Fermi right, this is a true statement.

    Geeze I can't believe I am defending Charlie Demerjian.

    In your zealous nVidia fanboyism, you failed to see the point of my original idea of the humor... the response to the interviewer by the nVidia PR guy was funny. I couldn't care less how much of Charlie's rag-trash ramblings are right or not; I laughed at the nVidian response because he obviously does not like Charlie (understandably).

    Now, in terms of Fermi itself, nVidia did not lay a turd, but close to one. The performance is there, which is great -- but damn, you could roast your holiday turkey with the friggin' thing.
    Last edited by JumpingJack; 03-27-2010 at 08:00 AM.
    One hundred years from now it won't matter
    what kind of car I drove, what kind of house I lived in,
    how much money I had in the bank, nor what my clothes looked like...
    But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  10. #35
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    very nice alex!
    You generally never go to production with the first rev of the silicon anyway, if you do, you are awfully lucky.
    hear hear... even nvidias pr staff knows this, how come jensen still didnt get that memo? ^^

    Bryan Del Rizzo: Of course, the easiest one is double precision. There aren’t any games, there aren’t any user applications today that use double precision. But in the medical markets, in the HPC field, that’s dramatically important to them. So we’re not going waste the time and waste silicon space, and all that kind of stuff, providing a feature to gamers and incurring a cost for them, that they don’t need or possibly want.
    wut? well excuse me but tesla and geforce cards are based on the same silicon... so yes they DID put extra logic in their geforce cards and then disabled it, which DOES affect yields and make the geforce cards more expensive...

    Bryan Del Rizzo: Well, actually there probably is because I think one of our executives in the financial offices said that you can expect other variants of the GeForce family sometime in the second half of this year. So, based on what they said that probably is true.
    he works for the same company but gives press replies based on hearsay? 0_o

    TechREACTION.net: The other day, SemiAccurate, another technology blog, reported that Nvidia was forcing resellers to purchase 80 pieces of your GT2XX series cards in order to get GTX 400 series cards. I have to ask, is there any truth to this that you are aware of?

    Bryan Del Rizzo: If you look at the accuracy of that site, I think there’s a reason why it’s called SemiAccurate. Actually, I’d probably say it’s really “not accurate”. Given the immense fabrication of stories on that site, you know, it is what it is. Let me say this, if you go to Google and you search for Nvidia stories, stories from that site come up as either satire or parody. So, I’ll leave it at that.
    that doesn't answer his question though... he didn't say whether it's true or not...

    balls to the wall performance
    first time i hear this
    what exactly does it mean?

    i like his point of view, focussing on the experience and not that much on performance and benchmarks... because in the end its the experience that counts, and its something you cant argue about, you either like it or dont... when it comes to benchmarks people go wild and argue about the settings etc...
    Last edited by saaya; 03-28-2010 at 01:36 AM.

  11. #36
    Banned
    Join Date
    Mar 2010
    Posts
    88
    Quote Originally Posted by JumpingJack View Post
    There is no flaw in my logic, because I was not trying to make any kind of logical argument. I said Charlie got a few things about Fermi right, this is a true statement.

    Geeze I can't believe I am defending Charlie Demajaran.

    In your zealous nVida fanboyism, you failed to see the point of my original idea of the humor... the response to the interviewer by the nVidia PR guy was funny. I could care less how much of Charlie's rag trash ramblings are right or not, I laughed at the nVidian response because he obviously does not like Charlie (understandably).

    Now, in terms of Fermi itself, nVidia did not lay a turd, but close to one. The performance is there, which is great -- but damn, you could roast your holiday turkey with the friggin' thing.
    Way to try to turn this into some fanboy war, kid. I can't believe you can sit here and defend him.. your logic is simply flawed, I've pointed it out clearly and you pick and choose vague concepts that he was "right" on. I'm sorry, but the things you can consider him right on were already known by all.

    This isn't even a discussion about Fermi being good or bad.. it's just a discussion about Charlie, and you've managed to accuse me of being a fanboy. Congratulations.

    I'll return the favor and call you an AMD fanboy I guess? Great way to avoid the substance of the conversation isn't it?

    I know what you were joking about, but my posts have specifically addressed your mention of Charlie getting a bunch of things right, which I disagree with. There was no information of value in the end, given it was pretty much all incorrect, except for "it's a big chip, and it will be hot". In my opinion, asserting something as specific as SPs, performance, power draw, manufacturability, manufacturing cost, and yields, and getting pretty much all of it wrong, is all I need to judge the quality of Charlie's information. I'd hope you could look at all the things he guessed and compare what he got right vs. what he got wrong, instead of noticing one or two things that happened to be vaguely right (out of dozens of assertions, the rest of which were wrong). Come on..
    Last edited by Svnth; 03-28-2010 at 01:52 AM.

  12. #37
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Quote Originally Posted by JumpingJack View Post
    There is no flaw in my logic, because I was not trying to make any kind of logical argument. I said Charlie got a few things about Fermi right, this is a true statement.

    Geeze I can't believe I am defending Charlie Demajaran.

    In your zealous nVida fanboyism, you failed to see the point of my original idea of the humor... the response to the interviewer by the nVidia PR guy was funny. I could care less how much of Charlie's rag trash ramblings are right or not, I laughed at the nVidian response because he obviously does not like Charlie (understandably).

    Now, in terms of Fermi itself, nVidia did not lay a turd, but close to one. The performance is there, which is great -- but damn, you could roast your holiday turkey with the friggin' thing.
    What's clear to me in the case of the Fermi design is that the Nvidians didn't target the gaming market but GPGPU capabilities instead. They expected that it would perform well in gaming workloads, but obviously it's not the champion when it comes to perf./watt or even absolute performance. It IS faster than a single Cypress in most cases, but a difference of 10 or 15% on average is really NOT a big advantage, given the size of the darn chip, the power it gobbles and the heat output. Yields are probably much lower, and cost per card much higher, than Cypress (I used the word "probably" even though I know the word "definitely" should be there instead).
    So it's not a complete turd, but it's close...

  13. #38
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    Quote Originally Posted by Svnth View Post
    Way to try to turn this into some fanboy war, kid. I can't believe you can sit here and defend him.. your logic is simply flawed, I've pointed it out clearly and you pick and choose vague concepts that he was "right" on. I'm sorry, but the things you can consider him right on were already known by all.
    Kid? I've known Jack for years and I can tell you that he's certainly one of the most knowledgeable people here, and he has probably had more experience with computers than some members have been alive. You are clearly biased. Charlie runs a rumormill and has given us some useful information at times and garbage at other times. Get over it and stop trying to create a flamewar. You've done the majority of the instigating in this situation and I suggest you stop before you get banned.

    Don't insult him because you didn't get your way, especially considering how little weight your word holds right now. You've been on this site all of a week, do yourself a favor and try to make a better first impression.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  14. #39
    Xtreme Member
    Join Date
    Mar 2008
    Location
    utah ogden
    Posts
    110
    Quote Originally Posted by zerazax View Post
    448Sp and 600MHz was right for the 470. So in regards to the 480, he was wrong sure... but again, you have to look at the timing. Less than a month ago, 512 SPs + 600 MHz was what was confirmed, but supposedly Nvidia just a couple weeks before "launch" decided 480 SPs + 700 MHz was better.



    No, not every reviewer was testing with furmark for absolute... look up the TomsHardware test. They used Unigine and saw the card outdraw even the 5970 over the time of the benchmark.

    Look at Anandtech's review... in Crysis, the 480 draws 100W more than the 5870, and 40W more than the 5970.

    In fact, Furmark was better for Nvidia if comparing the 480 to the 5970.



    The 58xxs are actually drawing what they're rated for in Furmark - the GTX 480, even in the most favorable settings, is drawing well above what it's rated for.

    Look at the image above... the 5870CF draws 223W more than the 5870 single. Anandtech is apparently using an 82% efficiency figure, so 223W at the wall @ 82% efficiency = 183W consumed by the extra card, which is under the 188W it is rated for.

    So no, your statement is incorrect - Nvidia is the one drawing far above.
    Well, even looking at Anandtech's graph, when you do a little math, the 480, while very, very power hungry, doesn't exceed its TDP by as much as it appears, even in a worst-case situation. Let's use the 480 SLI measurement of 851 watts. Let's say 151 watts is the rest of the system, so now we have 350 watts at the wall for each card, and at 80 percent efficiency they actually use 280 watts each, 30 watts above their TDP.

    We have seen many cards draw that much above their TDP in Furmark before. What I believe is happening is that these cards are hitting numbers close to their TDP, above or below, much more often than graphics cards of the past, making the power numbers much more apparent. But from what I have seen, the TDP number isn't too far out of place; it is just hitting that limit much more often than normal.
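
    Written out, that estimate looks like this. It's a rough Python sketch; the 151-watt system share and the 80% PSU efficiency are the assumptions from the paragraph above, not measurements, and 250W is the 480's rated TDP:

        # Rough per-card estimate from the GTX 480 SLI wall reading quoted above.
        wall_total_w = 851        # wall draw under load, 480 SLI system
        system_share_w = 151      # assumed draw of everything except the two cards
        psu_efficiency = 0.80
        gtx480_tdp_w = 250        # NVIDIA's rated TDP

        per_card_w = (wall_total_w - system_share_w) / 2 * psu_efficiency
        print(f"Estimated per-card draw: {per_card_w:.0f} W")            # ~280 W
        print(f"Above rated TDP by: {per_card_w - gtx480_tdp_w:.0f} W")  # ~30 W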

  15. #40
    Tyler Durden
    Join Date
    Oct 2003
    Location
    Massachusetts, USA
    Posts
    5,623
    Quote Originally Posted by saaya View Post
    very nice alex!
    hear hear... even nvidias pr staff knows this, how come jensen still didnt get that memo? ^^

    wut? well excuse me but tesla and geforce cards are based on the same silicon... so yes they DID put extra logic in their geforce cards and then disabled it, which DOES affect yields and make the geforce cards more expensive...

    he works for the same company but gives press replies based on hearsay? 0_o

    that doesn't answer his question though... he didn't say whether it's true or not...

    first time i hear this
    what exactly does it mean?

    i like his point of view, focussing on the experience and not that much on performance and benchmarks... because in the end its the experience that counts, and its something you cant argue about, you either like it or dont... when it comes to benchmarks people go wild and argue about the settings etc...
    Thanks Sascha.

    Yea, the hardware-differences answer was particularly interesting, and as far as I know, he is wrong, as they are indeed the same silicon. But I wasn't going to correct him.

    But everything else was good banter, I thought it came out well.
    Formerly XIP, now just P.

  16. #41
    Xtreme Enthusiast
    Join Date
    Jul 2004
    Posts
    535
    Yeah, I think he was the first to predict 480SP, but then he changed his tune a bit last minute when a couple of his sources got their hands on lower clocked 512sp cards. Can't really fault the man on that one.

  17. #42
    Xtreme Addict
    Join Date
    Jan 2008
    Location
    Puerto Rico
    Posts
    1,374
    Quote Originally Posted by zalbard View Post
    There will not be a dual GF100 card with under 300W TDP using 40nm, period.
    I dare say 28nm would not help to meet this goal either....
    ░█▀▀ ░█▀█ ░█ ░█▀▀ ░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░█▀▀ ░█▀▀ ░█ ░█ ░░░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░▀▀▀ ░▀ ░░░▀ ░▀▀▀ ░░▀ ░░░▀░▀ ░▀ ░▀▀▀ ░

  18. #43
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Ontario, Canada
    Posts
    231
    GTX470 draws the same power as a GTX275..... there's your GX2 right there. I think a tri-slot cooler would be best but even if they make the card longer (11.5"?) they can fit a lot of cooling into it.

  19. #44
    Xtreme Addict
    Join Date
    Feb 2007
    Location
    Arizona, USA
    Posts
    1,700
    Oh sure, they could probably do a GTX495, but it would need to be a tri or even quad slot card, and not be PCI-E certified because it would be so far over 300w it isn't even funny.
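
    Putting numbers on that: assuming NVIDIA's published ratings (215W for the GTX 470, 250W for the GTX 480) and ignoring any shared board power, a quick Python sketch of the 300W argument:

        # Sanity check of the PCIe 300 W argument using the published TDPs.
        PCIE_LIMIT_W = 300
        TDP_W = {"GTX 470": 215, "GTX 480": 250}

        for name, tdp in TDP_W.items():
            print(f"Dual {name}: ~{2 * tdp} W vs the {PCIE_LIMIT_W} W PCIe ceiling")

        # What each GPU would have to live within on a spec-compliant GX2
        print(f"Per-GPU budget for a 300 W GX2: {PCIE_LIMIT_W / 2:.0f} W")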


    Core i7 920 D0 B-batch (4.1) (Kinda Stable?) | DFI X58 T3eH8 (Fed up with its' issues, may get a new board soon) | Patriot 1600 (9-9-9-24) (for now) | XFX HD 4890 (971/1065) (for now) |
    80GB X25-m G2 | WD 640GB | PCP&C 750 | Dell 2408 LCD | NEC 1970GX LCD | Win7 Pro | CoolerMaster ATCS 840 {Modded to reverse-ATX, WC'ing internal}

    CPU Loop: MCP655 > HK 3.0 LT > ST 320 (3x Scythe G's) > ST Res >Pump
    GPU Loop: MCP655 > MCW-60 > PA160 (1x YL D12SH) > ST Res > BIP 220 (2x YL D12SH) >Pump

  20. #45
    Xtreme Addict
    Join Date
    Feb 2008
    Location
    America's Finest City
    Posts
    2,078
    Quote Originally Posted by hurleybird View Post
    Yeah, I think he was the first to predict 480SP, but then he changed his tune a bit last minute when a couple of his sources got their hands on lower clocked 512sp cards. Can't really fault the man on that one.
    Actually, I think we hit the 480SP on the head first...

    Then everyone else started saying 512 and we stood our ground, and we were right.
    Quote Originally Posted by FUGGER View Post
    I am magical.

  21. #46
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by Svnth View Post
    Actually no, your statement is incorrect. I'm well aware the NVIDIA is drawing more than AMD, but in case you weren't aware, AMD throttles furmark specifically to avoid this case. NVIDIA does not. Hence furmark isn't a great way to compare these vs. their rated draw amounts. I'm not saying NVIDIA has accurate ratings, since many seem to agree it's more than claimed.. I'm simply stating that in the maximum power draw case, AMD does it too (to some degree, I know this for a fact).

    Additionally you don't just tack on 100mhz to your core in the last two weeks.. sorry it doesn't work like that. As I explained in another post, you can sit and guess numbers all day, and eventually something will line up. The fact is Charlie guessed both 448 shaders and 512 shaders in two different articles, and was wrong on both.

    Charlie said NVIDIA could *only* hit 600Mhz, meaning he was referring to their high end SKU. You don't get to look back and say "oh he was referring to the GTX 470 all along!" just because the numbers match. He was referring to the best possible scenario for the chip, which he was drastically wrong about.
    AMD does not specifically throttle furmark. From anandtech:

    This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem, by dedicating a very small portion of Cypress’s die to a monitoring chip. In this case the job of the monitor is to continually monitor the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to go back to the requested performance level. In the case of a stressful program, this can continue to go back and forth as the VRMs permit.
    This implies that if ATI knew what it was doing at all, it would set the chip to throttle if the card starts drawing more than its rated TDP. Hence: what you see is what you get, something that is questionable with Fermi.
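
    In other words, the monitor described there behaves roughly like the loop below. This is only a loose sketch of the behaviour the Anandtech quote describes; the level names, trip threshold, and polling interval are illustrative, not AMD's actual firmware:

        import time

        POWERPLAY_LEVELS = ["low", "mid", "high"]   # ascending performance states
        VRM_CRITICAL_C = 120.0                      # hypothetical trip point

        def vrm_monitor(read_vrm_temp, set_level, requested=2):
            """Drop one PowerPlay level while the VRMs are critical; restore when safe."""
            level = requested
            while True:
                if read_vrm_temp() >= VRM_CRITICAL_C:
                    level = max(0, level - 1)            # throttle back one level
                elif level < requested:
                    level = min(requested, level + 1)    # VRMs safe again, step back up
                set_level(POWERPLAY_LEVELS[level])
                time.sleep(0.001)                        # the real monitor runs continuously in hardware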

    EnJoY: He did not specifically refute Charlie's claims of the 'buy 80, get xx GF100!', which was fishy.
    Last edited by cegras; 03-28-2010 at 09:07 PM.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  22. #47
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    svnth, click the red X somewhere next time you visit this thread. If you don't, you'll probably find yourself in ban land.

    All along the watchtower the watchmen watch the eternal return.

  23. #48
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    I love how he makes out that the 295 is made from 2 x 285 cores... NUB
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  24. #49
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Quote Originally Posted by zalbard View Post
    There will not be a dual GF100 card with under 300W TDP using 40nm, period.
    Couldn't agree more. Next node down... I'll believe it. I have my doubts that Nvidia will be able to get those out in a time frame to compete with AMD though... ( Nvidia totally will aggro AMD and wipe the shareholder raid, this is why you must sheep the refresh adds! PS : this comment is retarded so please disregard it )
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  25. #50
    Tyler Durden
    Join Date
    Oct 2003
    Location
    Massachusetts, USA
    Posts
    5,623
    UPDATE: Bryan was very tired when he did this interview with me and made a mistake in mentioning that the GeForce did not contain hardware functionality for double precision. What he meant to say was that the GeForce does not use ECC, whereas the Tesla parts do.
    Formerly XIP, now just P.


