i hope im right, the oc'ers will have an X850 w/ 512MB, but ATI will show off the same card in a DriverHeaven article.Quote:
Originally Posted by DevilsRejection
I saw some talk about Cell, and I wanted to make some corrections:
--The Cell is not dual core, it has 9 cores (one PowerPC PPE plus eight SPEs).
--The Cell is not just rated at 4.5GHz, it actually runs at 4.5GHz, and up to 5.2GHz with enough voltage and a big PSU. That's on air, mind you; on cascade you would probably get close to 10GHz.
--The Cell can theoretically hit 256 GFLOPS, whereas a P4 even at 7.2GHz would probably only manage about 15.
--The reason Cell can achieve such speeds is that it trades control logic for processing logic. The actual execution units of P4s and AMDs make up a small portion of the overall chip (excluding cache); the rest is eaten up by control logic. Of course, removing all that control logic means the chip's performance is far more sensitive to how you program it: now you have to handle branch hints, pre-fetching for caches, etc. manually. This applies primarily to the SPUs; the PPC core retains branch prediction, auto-prefetch, etc., but it doesn't have out-of-order execution, so even there you have to be careful. Even so, many of these problems, while seemingly very difficult, are imho quite manageable with the advancement of compilers and by thinking differently about how you program.
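To give a concrete flavor of what "manual control logic" means in practice, here is a minimal sketch in plain C. I'm using the GCC builtins `__builtin_expect` and `__builtin_prefetch` purely as stand-ins for SPU-style techniques; the real SPU toolchain has its own intrinsics and DMA model, which this does not show:

```c
#include <stddef.h>

/* Sum only the positive entries of an array. On a core without
 * hardware branch prediction or automatic prefetch, the programmer
 * supplies the hints. GCC builtins are used here as stand-ins for
 * the SPU's own intrinsics. */
double sum_positive(const double *a, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Fetch data a few iterations ahead of its use. */
        if (i + 8 < n)
            __builtin_prefetch(&a[i + 8], 0 /* read */, 1 /* low locality */);
        /* Tell the compiler which way the branch usually goes. */
        if (__builtin_expect(a[i] > 0.0, 1))
            total += a[i];
    }
    return total;
}
```

The point is only that the scheduling decisions a P4's front end makes invisibly become explicit, visible lines of code here.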
Essentially the Cell takes the best from modern CPUs and GPUs and puts them together for awesome performance, and imho a better programming model. When you program the Cell, you really are programming the hardware directly. You are not hidden behind a monolith of control logic that re-orders and rewrites your code as the CPU sees fit. Rather, that responsibility is left to the programmer, where imho it should lie. With Cell you simply can't get away with being a bad programmer.
Of course realistically you won't get 256 GFLOPS of performance, maybe 150 GFLOPS sustained, which is still about 10x what a 7.2GHz P4 can do. Then put the Cell on cascade, and that very same P4 will look like ENIAC in comparison, lol. Personally I can't wait to get one; high quality real-time raytracing might be possible on such a processor. Does Cell mean the end of the GPU then? Far from it imho. I believe it will allow a much more seamless integration of the GPU, and any other co-processor, within the computer system. They will be treated not as disparate devices, but as extensions of the CPU itself. Sure, x86 processors will last a long time thanks to their extensive codebase, but Cell and Cell-like processors are the future. Especially so when it's certainly powerful enough to run a virtual x86 processor (i.e. emulate it) and still blow the doors off any P4, even the overclocked ones.
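For what it's worth, the 256 GFLOPS figure falls out of simple arithmetic if you assume eight SPEs each retiring a 4-wide SIMD fused multiply-add (2 FLOPs per lane) every cycle at a 4GHz clock; these parameters are my assumptions for the sketch, not official figures:

```c
/* Back-of-the-envelope peak-FLOPS estimate for Cell's SPEs.
 * Assumed (not official): 8 SPEs, 4-wide SIMD, fused multiply-add
 * = 2 FLOPs per lane per cycle. */
static double peak_gflops(int num_spes, int simd_width,
                          int flops_per_lane, double clock_ghz)
{
    return num_spes * simd_width * flops_per_lane * clock_ghz;
}
/* peak_gflops(8, 4, 2, 4.0) = 256.0
 * peak_gflops(8, 4, 2, 4.5) = 288.0, so at a 4.5GHz clock the same
 * assumptions would actually put the theoretical peak a bit higher. */
```

Sustained throughput is then whatever fraction of that peak the programmer can keep the SIMD pipes fed for, which is why a 150 GFLOPS sustained figure is plausible but not guaranteed.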
Ok LOL No one will ever again accuse me, Gf4, or ANY OTHER PERSON of TOO MUCH SPECULATION ROFL !!!!!Quote:
Originally Posted by DudeMiester
There's enough there for a year's worth of it !!!
Seriously, DudeMiester, if I asked you for a link, I imagine you'd have as much trouble finding one as convincing me that it can go from 4.5 stock to 10GHz LOL
Perkam
Are you sure you all haven't been led up the garden path by him slipping one little word into the sentence ... ati. It could be a cunning ruse to steer you away from the fact nvidia may have something new floating about
http://www.xtremesystems.org/forums/...ad.php?t=54149
the plot thickens :D
Relax Rancid, Nvidia is a ways off from making any breakthroughs until 2006, and that isn't any news to get excited about lol. I mean, we didn't hear the confirmed X800 specs until about a month before it launched, so you shouldn't waste too much enthusiasm on it; the plot will only thicken in a year when Nvidia brings out a response to the R520. Until then, ATI's lead, at least in the PC market, may well be easily maintained.Quote:
the plot thickens
For all we know it could be Nvidia's entry into the multimedia graphics market, cards like those from Matrox for HD TV on PC and such; it would make sense considering the "G"70 nametag.
Perkam
huh? how so? r500 taped out before r520 afaik... at least thats what i read...Quote:
Originally Posted by zakelwe
Sort of like a downgraded R500 if you will, but yeah, the R500 in the Xbox II has to stand up to the NVXX (and pls dont someone be smart and tell me its the G70) cos Sony and Nvidia will have an extra year to work on them. Hence the R520's successor or update will be using R500 architecture. Makes sense to me.Quote:
huh? how so? r500 taped out before r520 afaik... at least thats what i read...
Perkam
Perkam
andy, but this doesnt make any sense... why is r420 not sm3.0? because ati had problems designing an sm3.0 gpu that performed well enough, as it's a hard thing to build an sm3.0 gpu from scratch.Quote:
Originally Posted by zakelwe
knowing this, does it make any sense for ati to build two! different sm3.0 gpus? one for xbox and one for the pc market? they won't make that much money selling the license for the gpu to microsoft, so it wouldn't be worth it at all i think...
a ps3 doesn't have that much cpu power; it's not really 9 cores, it's 1 core + 8 fpu units...Quote:
Originally Posted by Napoleonic
wasnt that nvidia? :confused:Quote:
Originally Posted by perkam
ati loves surprises as far as i can remember... the 8500 was a rabbit out of the hat, just like the 9700, the 9800 and the x800 cards. there were rumours about r420, but not much...
seeing it like this i doubt they will be playing with an r520... and they surely wouldn't allow people to post info and results of it on the net before the release...
it's not a multicore though... at least not in my opinion... it's one core with a part of the core being split up and replicated, but it doesn't have several independent cores, hence it's not a multi-core processor... at least not by my definition :PQuote:
Originally Posted by DudeMiester
this would be the first time in history a pc chip would be a downgraded console chip... i doubt this is going to happen... :PQuote:
Originally Posted by perkam
the strength of consoles is that they get produced in masses and produced cheap; that's also why r500 will be made on a 65nm process afaik, to make them cheap enough, while r520 is already being made at 90nm afaik.
This is from a Dutch site, Tweakers.net; they came out with this news today that it is the new X850 512MB that ATI introduced today. nice isnt it :D
this is the one, another pic
So 256mb more of ram will give you 10,000 extra 3dmarks? I find this very doubtful
definitely not 10k more marks just by doubling memory, gotta be more to it
extra ram won't help, period, in today's games except maybe for 1600x1200 with aa/af in doom3 ultra quality
That's going to matter someday though. Doom 3 on ultra at 1600x1200 aa/af will be equivalent to games in 3 years on medium or low. They need to make the step eventually.
actually, according to ATI, gains have been seen in Far Cry and Doom 3 at resolutions as low as 1024Quote:
Originally Posted by Geforce4ti4200
The weekend will tell what card it is =p and i just bought 2 6800gt to run SLI YAY!!! :cussing: :brick: :grr: :soap: :rolleyes: typical me if it turns out to be an x950xt :D so its bound to be the big bang card everyone is wondering about
Quote:
Originally Posted by Jackass
id like to see a review comparing a 512mb card vs. a 256mb card with everything else being equal, because the last time I saw one it was a 128mb vs. 256mb 9800pro, and there was ZERO difference except at 1600x1200 with aa/af, where it was very small
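To put rough numbers on why the difference only shows up at 1600x1200 with AA: the framebuffer itself is a small, resolution-dependent slice of VRAM, and the rest goes to textures. A back-of-the-envelope sketch, using my own assumptions (32-bit color, 32-bit depth+stencil, color and depth stored per AA sample, double buffering), not figures from any review:

```c
/* Rough VRAM budget for the framebuffer at a given resolution and
 * AA level. Assumed: 4 bytes/pixel color, 4 bytes/pixel depth+stencil,
 * AA multiplies both by the sample count, double buffering doubles
 * the color allocation. Real drivers vary. */
static double framebuffer_mb(int width, int height, int aa_samples)
{
    double pixels = (double)width * height;
    double color  = pixels * 4 * aa_samples * 2;  /* front + back buffer */
    double depth  = pixels * 4 * aa_samples;
    return (color + depth) / (1024.0 * 1024.0);
}
/* framebuffer_mb(1024, 768, 1)  =  9.0 MB
 * framebuffer_mb(1600, 1200, 4) = ~88 MB   */
```

Even at 1600x1200 with 4xAA the framebuffer stays under 100MB on these assumptions, so the extra 256MB mostly buys texture headroom, and gains should only appear once a game's textures plus framebuffer overflow the 256MB card.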
was it doom or farcry?
"Gains" = .1 FPS :rolleyes:Quote:
Originally Posted by Jackass
I think there is deliberate smoke and mirrors coming from Ati. Almost certainly macci, Opp and FUGGER will be benching bullhead and the X850 512MB, but that 41k score posted on DriverHeaven is not going to be from a machine that is watercooled on the CPU only and has the above card in it. Not unless they have the world's best AMD in it that can do 3.2GHz+ on water, and that water is not even chilled as far as I can tell.
I think they may show the r520 running at the event with big numbers (like nvidia did with SLI), but more than likely the details will emerge at CeBIT.
Sayaa, I do not think R500 has taped out yet?? It has been in development for a long time because it was originally due to be the r400, but then Ati decided they could win the speed battle over nvidia with the simpler, r360-based r420 instead. So r400 morphed into r500. Ati did not do SM3.0 on r420 not because they could not, but because they judged that 3.0 marketing would not be as big as having the fastest video chip.
What they probably found difficult was the unified pixel and vertex shader units. If that is a difficult task, it actually makes more sense to put it in a console, where strict parameters on other hardware and software are in place. Then you can trickle the complexity into the desktop market vpu when you get it sorted... hence r500 is more complex than r520, but r520 uses techniques learnt on the r500.
Of course Ati underestimated nvidia when it came to pure speed, hence their last minute introduction of the X800XT PE and the connected small production run of same. SLI probably did not make them too happy either. So now nvidia have got the speed and the marketing tickbox of SM3.0. So in goes SM3.0+ from Ati, and 90nm, and bigger clock speeds, possibly more pipes and possibly multicore.
It's all a bit grey of course, my pet theories might be wide of the mark, fun to speculate though.
Regards
Andy
Dude, the GPU in Xbox was better than the competition at the time. It had 1 more pixel shader than the desktop gf3 i think. It was surpassed, but it was better at the time.
the only reason im going with that being an r520 chip or just a card based on ati is the backplate, looks very very similar to a backplate for an x850 no?
http://img183.exs.cx/img183/7528/yum23oy.jpg
http://img.hexus.net/v2/graphics_car...kplane_big.jpg
I doubt 256MB of memory suddenly gives the X850 such power. Cos to me, that actually looks like a one-slot solution, and the R520 will have to be one for the AMR possibility. Though considering they don't require the connector thingy SLI has, the R520 CAN be dual slot, but that wouldn't be a sign of newer tech, just old tech maxed out ;)Quote:
the only reason im going with that being an r520 chip or just a card based on ati is the backplate, looks very very similar to a backplate for an x850 no?
Perkam
6800 ultra is dual slot.
Your point being ???Quote:
6800 ultra is dual slot.
Perkam