I think we need to start seeing much better multi-GPU scaling, with things like shared memory. The performance king is not the best-selling card, and may not be the most profitable card, but somehow it's the only thing most people consider when trying to pick a winner.
If we could get GPUs down to $100 a piece (with the bad-yield bins selling for $50), and you could just buy a 1x, 2x, or 4x card and fit up to 4 of them into your PC, you could use the same core across a whole $50-400 range. That's the direction I hope they are aiming for. Trying to make a GPU bigger than 300mm² seems like an utter waste when good design and drivers could probably save millions in development cost and silicon waste. (But I don't have a degree in chip engineering, and everything I said has the potential to be 100% impossible.)
Originally Posted by motown_steve
Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.
Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
Last time that happened from nVidia's side, do you know what happened? G80.
And only a week or so before the effective launch did the enthusiast community know what to expect, and even then we were surprised by it.
I am not saying that G300 will be as revolutionary (performance-wise) as G80 was, but it might be.
Patience is what you have to have.
Are we there yet?
G92 9800GTX ---> G200 GTX 280 is like a 3x speed increase.
Just run Crysis with everything set to Enthusiast at 1920x1200; G92 tri-SLI will crumble in the face of a GTX 280. Oh wait, let's make it a bit more fair and use an overclocked 280, maybe the EVGA FTW or BFG OCXE? Since the 9800GTX was nothing more than an overclocked 8800 GTS 512 anyway.
Honestly it's really hard to compare the performance of cards from different generations. And how do you compare SLI/CF setups against next-gen single cards? There is always a performance loss due to the software overhead of SLI/CF, so you will rarely experience such a setup's true potential. Do you use games, benchmarks, or just pure theoretical performance numbers (FLOPS etc.)? Just by the specs alone I can confidently say that G300 will pull ahead of a 285 SLI setup easily, thanks to the 2x shader count and the boost in bandwidth and core clocks.
Also, RV870 (5870) is about 2.26~2.3x faster than RV770 (4870): the shader count doubles and the clock bump covers the rest. If anything the 5870 is like a perfect HD 4890 crossfire.
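For what it's worth, that ~2.27x lines up with theoretical shader throughput from the specs as reported at the time (a back-of-the-envelope sketch, not a benchmark):

```python
# Theoretical single-precision throughput = shaders * 2 ops/clock (MADD) * clock.
# The 2-ops-per-clock factor cancels in the ratio, so shaders * clock is what matters.
rv770_flops = 800 * 750e6 * 2     # HD 4870: 800 SPs @ 750 MHz  -> 1.20 TFLOPS
rv870_flops = 1600 * 850e6 * 2    # HD 5870: 1600 SPs @ 850 MHz -> 2.72 TFLOPS
print(f"ratio: {rv870_flops / rv770_flops:.2f}x")   # ~2.27x
```

Real-game scaling will land below this, of course, since bandwidth only goes up ~1.3x.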
G200 B2 or B3? Because if it's bigger than the 65nm B2... mother of god.
I wonder if Nvidia looked at the RV770 vs G200 debacle the same way most of us did. I think what Nvidia really took home was not that they made G200 way too big, but that they DID NOT make it BIG ENOUGH, thus failing to create a significant performance lead over RV770. I guess the way Nvidia thinks is that if they can make a GPU so powerful that the competition can barely compare, the consumer will care less about the technicalities.
Last edited by LiquidReactor; 09-16-2009 at 01:43 PM.
I just think NVIDIA should, sooner rather than later, drop this silly tactic of focusing mainly on the high end. It's not a healthy business in the long run to rely on releasing a big-but-fast card: there are many disadvantages, mainly development time & cost, greater risk of yield issues, cooling & power limitation issues, and not to mention far fewer customers in this price range than where ATI is focusing.
NVIDIA should try to offer a great top-to-bottom product range based on a new arch. If they pull off a successful series, that would be disastrous for ATI's current tactics: ATI would have to lower prices greatly and possibly still not get any sales, and it would start chewing on their market share, which atm is what ATI is most interested in.
Perhaps the biggest problem right now is that even if their big fat chip is fast, ATI will still get sales, as NVIDIA has nothing that competes in the same price range. I just don't see the logic behind NVIDIA atm; hopefully the HD 5xxx series will teach 'em.
This is what I'd personally do for the next arch after GT300:
Development: focus on finding a better alternative to stream processors, something more efficient; I think it's about time to move on from them now. Or at least find ways to make them more efficient if no alternative turns up. Also spend some time looking into multithreading and how to make multicore CPUs cooperate better with GPUs.
I'd also give the engineers clear targets, something like a strict 190W TDP and a 50% performance increase over last gen as a goal. I'd put slightly more focus on power consumption than on performance for the arch after GT300; the next wave can focus more on performance.
Schedule: release a low-end segment of the new arch (think "GT460" or whatever) some 3-4 months before the mid/high end (GT400) as a teaser. Good for getting recognition, for starting the marketing of the new features earlier, and for simply testing the grounds of the new arch before going to the bigger and more complex cards.
Last edited by RPGWiZaRD; 09-16-2009 at 02:43 PM.
Intel? Core i5-4670K @ 4.3 GHz | ASRock Extreme6 Z87 | G.Skill Sniper 2x8GB @ DDR4-1866 CL9 | Gigabyte GTX 970 OC Windforce 3x | Super Flower Titanium 1000W | ViewSonic VX2268wm 120Hz LCD | Phanteks PH-TC14PE | Logitech MX-518 | Win 7 x64 Professional | Samsung 850 EVO & 840 Pro SSDs
If all people would share opinions in an objective manner, the world would be a friendlier place
G200A2 is the 65nm core.
G200B2/B3 are both 55nm cores; B2 was used for early GTX285 samples but mainly for Quadro cards.
It must be me, because I am really confused...
He was listing silicon bigger than G200, and I thought you were doing the same.
The only other thing would be if you were listing a few of the 500 that are smaller...
Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
—Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.
Let's just assume G300 is more than 2x as fast as G200... then what? What are you gonna do with all that GPU power? There are no new demanding games coming out for a while; the only one I know of is Crysis 2...
Unless you have a 30" display, what would you want that much GPU power for?
wha?
Yeah, that's what I don't get either... was their strategy really to have G200 and G92 only? One high-end chip, with last gen's high end as mainstream? That's a terrible strategy that leaves wide gaps... I really hope they learned their lesson with G200, and that the rumors about several cut-down G300 parts coming soon after G300 are true...
It's getting hard to justify buying a super high-end VGA these days unless you're benching... same for CPUs... there is not that much to gain going from mainstream to high end these days...
I think it's true, but it's only 4 hot-lot wafers... so it doesn't really mean too much. What it does mean is that G300 is not ready for mass production, and even a press launch at the end of this year seems unlikely. How much time and work it'll take to get yields up is really unclear... It's like putting a PC together in 3 minutes in a big rush, and then it hangs on bootup... it doesn't work, but you can't tell how long it'll take to fix... could be a very simple thing you overlooked since you rushed it... or it could mean one part is damaged, or you need lots of debugging to get it running... kinda similar here I think...
Last edited by saaya; 09-16-2009 at 05:17 PM.
Folding@Home. But I agree with you. Consoles are very far behind, and it would be overkill to buy these things. Consoles are like Wiis now.
Yep, the only thing to beat GT200a is ULSI, which would take 100 years to turn into a successful microprocessor. I was hinting that no one makes bigger dies than nV. If you think all the way back to the Pentium, it was almost 300mm² on a 200mm wafer, so that is pretty close.
Maybe that's why nvidia partners are still churning out and promoting new models of the GTX 275/285 and 295 with custom PCBs etc.?
Assuming there are 104 dies on a wafer as Charlie said, then according to my die-size calculator the die area would be about 534 mm² if the die is almost perfectly square (~23.1mm x ~23.1mm). That would fit what the article said about the size being between GT200 and GT200b.
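For anyone who wants to check the math, here's a sketch using one common die-per-wafer approximation (gross dies minus an edge-loss term); different calculators handle edge dies differently, so it lands a couple mm² off the 534 figure but in the same ballpark:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """First-order estimate: gross dies minus a loss term for partial edge dies."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def die_area_for(target_dies, lo=100.0, hi=700.0):
    """Bisect for the die area that gives the target candidate count."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if dies_per_wafer(mid) > target_dies:
            lo = mid   # too many dies -> the die must be bigger
        else:
            hi = mid
    return (lo + hi) / 2.0

area = die_area_for(104)    # 104 candidates per 300 mm wafer, per Charlie
side = math.sqrt(area)      # edge length if perfectly square
print(f"~{area:.0f} mm^2 (~{side:.1f} mm x {side:.1f} mm)")
```

Either way the result sits between GT200 (~576 mm²) and GT200b (~470 mm²).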
Last edited by eRacer; 09-16-2009 at 06:50 PM.
Originally Posted by flippin_waffles on Intel's 32nm process and new process nodes
That's funny, I was just telling someone that they must be having trouble with the 300 series, because I've seen no news about it lately.
Could have sworn they demoed a Tesla card based on the 300 last December though...
--lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
-- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
-- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
- GTX 480 ( 875/1750/928)
- HAF 932 - Antec TPQ 1200 -- Crucial C300 128GB boot --
Primary Monitor - Samsung T260
Yes, and those are all by Intel, which apparently is the world leader in semiconductor manufacturing process, doing the chips at their own fabs. Besides, those chips have much more cache (less prone to defects) than GT200, which makes things even worse for GT200.
No matter how the situation is twisted or folded, GT200 is HUGE. Smaller dies are always better, and such huge dies are just bad, bad and bad.
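To put rough numbers on why smaller dies matter so much: with the classic Poisson yield model Y = exp(-D*A) and an assumed, purely illustrative defect density, the commonly reported die areas for GT200 (~576 mm² at 65nm) and RV770 (~256 mm²) give wildly different yields:

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Poisson die-yield model: Y = exp(-D * A), the simplest classic estimate."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.4  # defects per cm^2 -- an assumed value for illustration only
for name, area in [("GT200 (65nm)", 576), ("RV770", 256)]:
    print(f"{name}: {area} mm^2 -> {poisson_yield(area, D):.0%} good dies")
```

Real fabs use clustered-defect models and salvage partial dies into cut-down SKUs (GTX 260 etc.), so the absolute numbers will differ, but the gap between a huge die and a mid-size die stays dramatic.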
Must have been GT200-based, definitely... they usually release the Tesla and professional graphics parts a couple of months after the end-user cards...
Comparing the GTX 285 vs the GTS 250 in Crysis and Crysis Warhead doesn't make sense, because both are in the unplayable or barely playable range imo. Even at 1280x1024 a GTX 285 only pulls around 20/30 fps (min/avg), and the GTX 285 is only 50-100% faster here...
Add 10 fps and you've got the STALKER Clear Sky results; like above, the GTX 285 is 50-100% faster than the GTS 250.
And now for GTA4, lol... a 9800GTX+ does about the same as a GTX 280 here... ouch...
http://www.pcgameshardware.com/aid,6...eviews/?page=2
G200 3x as fast as G92, my 4ss :P
And that's with a quad core @ 3.33GHz, which is where CPUs stop scaling in GTA4... so don't call it CPU-limited :P
http://www.pcgameshardware.de/aid,66...l/Test/?page=2
Last edited by saaya; 09-16-2009 at 09:34 PM.