Originally Posted by motown_steve
Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.
Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"
Has anyone really been far even as decided to use even go want to do look more like?
no, charlie has nothing to do with this... you know, not everybody who questions a rumor that would be good news for nvidia has some connection to charlie... some people are really paranoid when it comes to charlie
it doesn't really make sense for nvidia to have that many parts at launch... they couldn't sell any cards for over half a year now, they just couldn't... and now all of a sudden they have a big batch of cards for the launch? when even jensen said gf100 cards will only REALLY be available in the second half of this year? so there, nothing to do with charlie; simply based on what mr nvidia himself said, a launch with 60k cards is unlikely...
what do you mean? a healthy balance of thesis and antithesis is usually the best imo... bull and bear...
yes, just like rv870, which doesn't scale all that well with more bw either...
Both GTX 470/480 ES cards tested are downclocked versions, and they are using an old driver version. Clocks for the GTX 470/480: 625/1250/900...
those are really low gddr5 clocks...
5870 comes at 1200, that's 33% higher...
i wonder if nvidia clocked the mem that low to differentiate the 470 and 480, or if they really can't clock higher...
The clocks will be a little higher (not the 750 dream)... probably 675/1350/1100 (GTX 470) and 650/1300/1000 (GTX 480).
Too much hAT(e)I in this thread... ATI fanboys forget their 2900XT dark age, and they forget their 8800GTX nightmare...
Both GTX 470/480 will be good cards...
Hopefully GTX 460 will come in May and smash the HD 5770/5830/5850 on price/performance.
i5 2500K@ 4.5Ghz
ASRock P67 PRO3
P55 PRO & i5 750
http://valid.canardpc.com/show_oc.php?id=966385
239 BCLK validation on cold air
http://valid.canardpc.com/show_oc.php?id=966536
Almost 5GHz, air.
I'll believe the ES/non final BIOS part.
Why would you figure the 480 would have a lower memory clock? If anything the memory clock will be at least the same, most likely higher. The extra bit width helps, but not that much.
As far as hate, I wouldn't go that far. We've just yet to see / hear anything that contradicts the direction things seem to be going. Anyway, how are past gpu generations' success or lack thereof relevant? Yes, the 2900XT sucked and yes, the 8800GTX took the trash to the curb, but what does that have to do with today? I don't care if Nvidia released an awesome product line in the past (I owned an 8800gtx for a year and a half from launch week) if they don't continue to do so.
In closing, it would be nice if all of our pessimism could be greeted with a pleasant surprise, is all.
Feedanator 7.0
CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i
tsmc's public production rates for the 40nm process
if tsmc is only able to produce 80,000 wafers per quarter, i just can't see nvidia having 60,000 gf100 chips ready and waiting right now. availability will probably be scarce until tsmc ramps production further. i'm not doubting that there will be cards ready on the 26th, but good luck getting one for a reasonable price...
depends on the yields...
that's around 1K wafers per day...
let's say nvidia can use up half of tsmc's capacity, rather optimistic, but let's see where that takes us...
100% yield = 100 fermis per wafer (roughly) = 50k fermis per day
50% yield = 50 fermis per wafer = 25k fermis per day
25% yield = 25 fermis per wafer = 12K fermis per day
10% yield = 10 fermis per wafer = 5k fermis per day
even at 10% yields nvidia could stock 60k fermis in 2 weeks using their tsmc capacity for fermi only, or in 4 weeks using half of their allocated capacity.
so this actually makes sense and sounds very possible...
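Here's that supply math as a quick sketch, for anyone who wants to plug in their own numbers. The ~1K wafers/day, ~100 die candidates per wafer and half-of-tsmc's-capacity figures are just the rough estimates from the post above, not confirmed production data:

TSMC_WAFERS_PER_DAY = 1_000      # ~80,000 40nm wafers per quarter, rounded
NVIDIA_SHARE = 0.5               # assume nvidia gets half of that capacity
CANDIDATES_PER_WAFER = 100       # rough die-candidate count for a big chip like gf100
LAUNCH_TARGET = 60_000           # rumoured number of cards at launch

wafers_per_day = TSMC_WAFERS_PER_DAY * NVIDIA_SHARE   # 500 wafers/day

for yield_pct in (100, 50, 25, 10):
    good_dies_per_day = wafers_per_day * CANDIDATES_PER_WAFER * yield_pct / 100
    days_needed = LAUNCH_TARGET / good_dies_per_day
    print(f"{yield_pct:3d}% yield: {good_dies_per_day:8,.0f} chips/day, "
          f"~{days_needed:.0f} days to stock {LAUNCH_TARGET:,}")

At 10% yield that works out to about 12 days to reach 60k chips using the full assumed allocation (the "2 weeks" above), or roughly 4 weeks at half of it.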
this only shows that it's possible with tsmc's wafer output... if yields really were 10%, then each chip would cost $500, and it just wouldn't make sense for nvidia to produce a lot of chips. so while tsmc COULD supply nvidia with 60k chips in one month even at low yields, nvidia probably wouldn't do it.
provided the wafer output is correct... the amount of fermis available at launch depends entirely on nvidia... if they want, they can launch with 60k available right away. if it's less, then it's most likely not tsmc's fault, but rather that yields are so bad it doesn't make sense for nvidia to produce a lot.
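The cost-per-chip side of that argument is the same kind of arithmetic. The $5,000 wafer price below is simply what the "$500 per chip at 10% yield" figure above implies (10 good dies per wafer), not a confirmed TSMC quote:

WAFER_COST_USD = 5_000           # implied by $500/chip at 10% yield; an assumption
CANDIDATES_PER_WAFER = 100       # same rough die-candidate count as above

for yield_pct in (100, 50, 25, 10):
    good_dies = CANDIDATES_PER_WAFER * yield_pct / 100
    # cost per good die = wafer cost / good dies per wafer
    print(f"{yield_pct:3d}% yield: ${WAFER_COST_USD / good_dies:,.0f} per good die")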
I sure hope the GDDR5 is at least 4000 and not merely 3600. 144GB/s is on the low side for a new card, even if it isn't the top sku... And if the 275watt info is true, the 480 had better be like a rabbit out of a hat, otherwise...
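For reference, here's how those GB/s figures fall out of bus width and GDDR5 data rate. The bus widths used below (320-bit for the GTX 470, 384-bit for the GTX 480, 256-bit for the HD 5870) are the commonly reported ones, not anything confirmed in this thread:

def bandwidth_gbs(bus_width_bits: int, data_rate_mtps: int) -> float:
    """Peak memory bandwidth in GB/s (1 GB = 10^9 bytes, as vendors quote it)."""
    return bus_width_bits / 8 * data_rate_mtps * 1e6 / 1e9

print(bandwidth_gbs(320, 3600))  # GTX 470, 900 MHz GDDR5   -> 144.0 GB/s
print(bandwidth_gbs(320, 4000))  # GTX 470, 1000 MHz GDDR5  -> 160.0 GB/s
print(bandwidth_gbs(384, 3600))  # GTX 480, 900 MHz GDDR5   -> 172.8 GB/s
print(bandwidth_gbs(256, 4800))  # HD 5870, 1200 MHz GDDR5  -> 153.6 GB/s

So on the 470's rumoured bus, 3600 vs 4000 is the difference between 144 and 160 GB/s, with the 5870 sitting in between at its 1200 MHz.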
And lol at Charlie. You can totally tell that he is making fun by claiming he was wrong (when in fact, more than anything, it proves he was right... +/- 5 watts is nothing to fight about).
So it only takes one day for TSMC to get the order, cook the wafer, get it cut, test it, send it to Nvidia to test, Nvidia to get the kits to AIBs, and AIBs to get the cards manufactured, packed and shipped?
Hmmm... I guess we don't have a facepalm smiley.
Also, although we don't know the exact numbers of parked Ax wafers, those wafers have to last them until Bx is ready... which best case is end of Q2.
That looks like the HDMI (audio out) connector, anyone know what it actually is?
edit*
Thanks VVV!
Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
3x2048 GSkill pi Black DDR3 1600, Quadro 600
PCPower & Cooling Silencer 750, CM Stacker 810
Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
3x4096 GSkill DDR3 1600, PNY 660ti
PCPower & Cooling Silencer 750, CM Stacker 830
AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
2x2gb Patriot DDR2 800, PowerColor 4850
Corsair VX450
who said that? i talked about throughput, not latency... nvidia has had A3 since when?
it's not like they didn't have time to process the wafers, test, bin, build the cards etc
and like you said, they had quite a few wafers prepared, so, assuming those weren't broken beyond repair, that saved them some time as well...
why? why can't they kick off new wafers? why couldn't they have done so already?
an anonymous quote from somebody promising awesome performance... how can you post something like this? at least post a link to where you found it to give it SOME credibility...
why? even at 28nm gf100 will be a big chip, and who's to say how good 28nm yields will be?
Fermi = 225W (or 275W)
Not the end of the world!!
Probably while using some special GPGPU applications only.
Despite all the nay-saying, it IS a FIXABLE problem. Just look at the progress G92 made just by switching PCBs, or the reduced power of the higher-clocked HD4890.
Charlie is just being ultra-paranoid. The architecture is no more broken than it was in the X2900XT, and despite its power management and anti-aliasing bugs, that card still got great performance in many games and was the basis for the HD48xx.
In the end, when it launches (just as with the HD5xxx), the hype will evaporate to reveal pretty typical, average-looking video cards with double the horsepower of the old generation.
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
If it comes out like the 2900XT, that is horrible for the short term though. They were hyping it to be another 8800GTX, so if it comes in at anything less than 30% over the 5870, it's a short-term failure in my mind. In the long run, it's yet to be seen how the architecture will work; the speculation on the first series is already grasping at straws, so how can you guess its long-term viability?
I think the hype has gotten more and more negative the more we learn about it, so a typical card it is now. No typical card has gotten thousands of posts a month before its launch.
Intel Core i7-3770K
ASUS P8Z77-I DELUXE
EVGA GTX 970 SC
Corsair 16GB (2x8GB) Vengeance LP 1600
Corsair H80
120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
Corsair RM650
Cooler Master Elite 120 Advanced
OC: 5Ghz | +0.185 offset : 1.352v
"that card still got great performance in many games" (2900XT)
Problem was, nobody cared anymore.
Yet nVidia's biggest problem isn't just rushing Fermi out. The whole company seems to have a problem with management and execution not being in sync, and a lot of their grand ambitions from previous years, although accomplished to an extent (consumer CUDA apps etc), are not really being worked on anymore.
And how much power would GF104 take then? How fast will it perform relative to its power consumption? And if this 275W faceplant is true, how the heck are you still gonna get the midrange into laptops?
And ATI is doing all right by releasing so many 40nm products while knowing that TSMC has limited capacity?! Also not the best choice.
They could have sold many more 55nm products at the end of last year; nvidia too.
eric66, ATI has the same problems with power consumption. They use the same process. You will see in 2 weeks.
why do we have to wait 2 weeks to see ati's power consumption problems?
yeah, ati suffers from tsmc's issues as well, but i don't think it's that much...
tsmc's issues are pushing ati to sell their parts at a premium instead of flooding the market with low priced parts like they are used to... that's actually a good thing. i don't think they intended this, but the capacity shortage is pushing their asp up, which isn't a bad thing at all... and they still have great 55nm entry level and mainstream parts they can sell for super cheap...
just like nvidia still has G92 in 55nm, which they can sell in large numbers for super cheap...