http://www.theinquirer.net/?article=14373
The GeForce FX is 4x2 (it can work as 8x1 as well, but it's more efficient as 4x2). I bet ATI went for 8x2 or 16x1 as well...
R420 is supposed to be 8x1, right? If R420 is 8x1 and NV40 is 16x1, I think NV40 will be faster (of course, that's far from the only thing that matters).
Edit: Read it now, the Inq guy thinks the same.
Good lord...
wow, the nv40 should be a beast
I guess they really want to be best again in this "round".
Hope nVidia's NV40 is the real deal. Been waiting for a real benching board since the GF4.
:slobber:
I want!!!!
:toast:
I can't wait for these new cards. I heard they're out late spring / early summer (April, May), or is it a Q3 release???
Strange...no post from ace yet? :D
Sure looks good, will those be PCI express or AGP8X?
Who said R420 = 8x1? That's what ATI's current architecture is!
R420 will probably be 8x2 or 16x1.
umm yeah... the NV30 was going to own the R300 too.
The leaked specifications say that. Quote:
Originally posted by saaya
Who said R420 = 8x1? That's what ATI's current architecture is!
R420 will probably be 8x2 or 16x1.
Well, if my copy of Doom 3 comes with the Nvidia coupon, it looks like an FX5900U for me, 'cause this'll drive prices way down.
Personally I am not too impressed by ATI: high FPS, but not nearly as smooth. My friend's Ti4200 64MB at stock runs UT2k3 and CoD smoother, and it's 10-15 FPS slower than my 9600XT.
I'll see how good the coupon is, then decide between the FX5900U, FX5950U, or NV40.
~Bob
:hehe: Quote:
Originally posted by SupaMan
umm yeah... the NV30 was going to own the R300 too.
I thought the R420 would have 12x1 parallel pipelines.
hope they can figure out DX9 this round too ... lol
That was a rumour. Quote:
Originally posted by Kanavit
I thought the R420 will have 12x1 parallel pipelines.
The specs say 8x1, and they said 8x2 for the NV40, but if this is true it's going to be a whole lot faster than that...
I'm pretty sure the 9800 series is the smoothest card out there right now... Quote:
Originally posted by khellandros66
Well, if my copy of Doom 3 comes with the Nvidia coupon, it looks like an FX5900U for me, 'cause this'll drive prices way down.
Personally I am not too impressed by ATI: high FPS, but not nearly as smooth. My friend's Ti4200 64MB at stock runs UT2k3 and CoD smoother, and it's 10-15 FPS slower than my 9600XT.
I'll see how good the coupon is, then decide between the FX5900U, FX5950U, or NV40.
~Bob
Yeah, my new 9800 Pro looks smooth... maybe your settings are on performance and your friend's are on quality settings.
Again? I guess that means you'd have to go back what, a couple of years for the last time? Quote:
Originally posted by Endre
I guess they really want to be best again in this "round".
The GF4 Ti days were the last time they dominated. Quote:
Originally posted by thirdeye
again? I guess that means you'd have to go back what, a couple of years for the last time?
And they ruled from there all the way back to when they introduced the first GeForce...
You mean when they introduced the TNT2, which destroyed the Voodoo3.
Rumors put the ATI R420 at an 8x6 architecture (8 pixel pipelines, 6 vertex pipelines). nVidia's NV40 is at 16x1 for now, but might be 16x2 or higher.
Many think ATI, after reading that the NV40 will use 16x1, may also go to 16 pixel pipelines along with its already-official 6 vertex pipes. Guess we'll just have to wait and see. Perhaps ATI is saving 16x6 for the R450?
Well, that would be a first, as ATI has always used an x1 architecture, I believe.
Where did you read that? I highly doubt anyone's using anything more than x2.
*sigh* I cannot wait for these cards. I have the money ready; time is the only obstacle left.
Funny thing is, I don't normally rag on the Inquirer like a lot of people do, but I used to have (and may still have) an e-mail from Fuad from before the FX5800 (the hair dryer card) came out. In it he said he guaranteed it would be at least double the speed of the 9700 PRO.
I'm still laughing at that one :)
Warden
The Inquirer reported last month that ATI had already started shipping R420 silicon to board partners, which would indicate that the architecture should already be set...
nVidia and ATI will probably debut the cards at CeBIT in mid-March, so hopefully they will hit stores sometime in April.
I read all this and it's nice and all, but I can't help but have the following questions: Quote:
Originally posted by shrae
Inquirer reported last month that ATI had already started shipping R420 silicon to branders, which would indicate that the architecture should already be set...
nVidia and ATI will probably debut the cards at CeBIT in mid-March, so hopefully they will hit stores sometime in April.
Will a good Socket 939 AMD board be out by then and will it have PCIX?
How long will it take for PCIX to be refined?
Just how soon is PCIX gonna hit?
What the heck is up with DDRII?
Will the NV40 and R420 be both AGP and PCIX, and will I sacrifice performance or force myself into obsolescence by going with an AGP one?
That's a lot of questions that we don't have firm answers for. I'm thinking my current rig is going to be with me until some firm standards get implemented in released hardware.
Honestly, I think this is just about the worst year yet for hardware format changes. Too many at one time.
We will have to see how good the yields are that ATI and nVidia can squeeze out of the 130nm process. Even though nVidia already has some experience with complex 130nm designs in mass production, I think ATI's decision to keep the transistor count at "only" 175M is smarter.
With a 210M-transistor chip you get around 200 GPUs per 300mm wafer at 100% yield, while a 175M-transistor chip gives about 250 chips at 100% yield.
nVidia's GPU design is more complex, but they have a little more experience with the 130nm process, so I'd say those pluses and minuses weigh each other out. So let's say ATI and nVidia both get a 70% yield: then nVidia ends up with 140 functional GPUs per wafer while ATI gets 175. ATI will pay ~20% less for each GPU and can sell them cheaper.
210M transistors is a LOT. Since nVidia already had problems with power leakage, the NV40 will probably need a lot more juice than the R420, which means a more complex and expensive PCB plus more complex and expensive voltage circuitry, and since it will leak a lot of power it will run very hot.
Basically it sounds like nVidia is about to make the same mistake they made with the NV30: too complex and thereby too expensive.
Only this time their card might be the performance king. But I think they overestimate the effect that having the fastest card on the market has on sales of their mainstream products.
After all, ATI's success in the last year was more down to the great price/performance ratio of their mainstream cards than to the 9800XT being the fastest card out. It's the NV36, aka the 5700, that saved nVidia from losing further market share and money, not the NV35, the performance king of nVidia's lineup.
The NV36 is a different design, a cut-down version of the NV35: it has fewer transistors = smaller = higher yield = cheaper. While ATI keeps the transistor count small and can afford to sell the same-sized core in both performance and mainstream products, nVidia will again have to redesign the GPU to introduce a mainstream card with a good price/performance ratio.
What's your guess on next-gen card performance, btw? I guess an R420/NV40 will pull around 27-28K stock in 2k1 with an A64 3200+ at stock speeds. Can't wait to go to CeBIT and see those cards :D
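The wafer math in the post above can be sketched quickly. A rough sketch only: the $5000 wafer price is a made-up placeholder, and the die counts (200 vs. 250 per 300mm wafer) and 70% yield are the post's own guesses, not measured figures.

```python
def good_dies(dies_per_wafer, yield_rate):
    """Functional dies per wafer at a given yield."""
    return dies_per_wafer * yield_rate

def cost_per_die(wafer_cost, dies_per_wafer, yield_rate):
    """Wafer cost spread over the functional dies only."""
    return wafer_cost / good_dies(dies_per_wafer, yield_rate)

WAFER_COST = 5000.0  # hypothetical wafer price, same for both vendors

nv40 = cost_per_die(WAFER_COST, 200, 0.70)  # ~210M transistors, bigger die
r420 = cost_per_die(WAFER_COST, 250, 0.70)  # ~175M transistors, smaller die

print(f"NV40 ~${nv40:.2f}/die, R420 ~${r420:.2f}/die")
print(f"R420 die is ~{(1 - r420 / nv40) * 100:.0f}% cheaper")
```

With these numbers the smaller die comes out about 20% cheaper per functional GPU, which matches the post's estimate; any real advantage also depends on the two vendors not actually getting identical yields.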
Let's not forget the source... Just because the Inq. says it's 16x1 does NOT at all mean it's true. :D
Wait for benchies and image quality comparisons before buying.
Nvidia is quite well known for their lying lately, to boost sales.
"what is your guess on the next gen card performence btw? i guess a r420/nv40 will pull around 27-28K stock in 2k1 with an a64 3200+ at stock speeds, cant wait to go to the cebit and see those cards"
know whats funny and sad? I get flamed for those predictions and guesses, they seem to like toying with me, how immature! anyway thats a bit high for stock, seeing stock on a 9800xt is not even 21k! we may be seeing stock scores of more like 24k or so with the r420. socket 939 and 3700+ may be more like 27k stock however and low 30k overclocked
Quote:
Originally posted by Hollywood
I read all this and it's nice and all, but I can't help but have the following questions:
Will a good Socket 939 AMD board be out by then and will it have PCIX?
How long will it take for PCIX to be refined?
Just how soon is PCIX gonna hit?
What the heck is up with DDRII?
Will the NV40 and R420 be both AGP and PCIX, and will I sacrifice performance or force myself into obsolescence by going with an AGP one?
That's a lot of questions that we don't have firm answers for. I'm thinking my current rig is going to be with me until some firm standards get implemented in released hardware.
Honestly, I think this is just about the worst year yet for hardware format changes. Too many at one time.
Yes and no, only limited quantities :( Quote:
Will a good Socket 939 AMD board be out by then and will it have PCIX?
VIA's 939 chipset supports both AGP and PCI-X, and there will even be boards with BOTH slots :)
ATI's cards support both AGP and PCI-X natively. nVidia's cards will have a bridge chip on the card that translates AGP to PCI-X (a bottleneck, but AGP transfer rates don't really matter anyway). The NV45 will have native PCI-X support. Quote:
How long will it take for PCIX to be refined?
Most next-gen mobos will be PCI-X, like 80%+ Quote:
Just how soon is PCIX gonna hit?
Intel delayed the intro of DDR2 boards because of the memory's too-high prices and "performance issues" caused by the bad timings. While Intel's next-gen chipsets support both DDR1 and DDR2, most mobo manufacturers didn't want to mess with two PCB designs and just worked on DDR2 designs. Now Intel wants them to release DDR1 boards because DDR2 performs too badly and costs too much, so they delayed the intro of DDR2 and their new chipsets. Quote:
What the heck is up with DDRII?
NV40 = native AGP, R420 = native AGP. Later on, nVidia will start to sell NV40 cards with the AGP->PCI-X bridge chip to make them PCI-X compatible, and ATI will release its R423 with native PCI-X support. AGP transfer rates don't really mean anything; the difference in performance will be 1% or less, I guess. Quote:
Will the NV40 and R420 be both AGP and PCIX, and will I sacrifice performance or force myself into obsolesence by going with an AGP one?
:eleph: :eleph: :YIPPIE: :eleph: :eleph: ...:rolleyes: Quote:
Originally posted by EnJoY
Rumors leave ATI R420 @ 8x6 architecture. nVidia NV40 @ 16x1 for now, but might be 16x2 or higher.
Heheh, yeah, I still remember the 8x1 architecture discussion :rolleyes: Quote:
Originally posted by retrospooty
Let's not forget the source... Just because the Inq. says it's 16x1 does NOT at all mean it's true. :D
Wait for benchies and image quality comparisons before buying.
Nvidia is quite well known for their lying lately, to boost sales.
Does anybody ACTUALLY know that it's PCI-EXPRESS and not PCIX? PCI-X is an old standard and is on hundreds upon hundreds of workstation motherboards... it's PCIe... GET USED TO SAYING IT, PEOPLE
PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe get it?
Bit of a dumb question here... 16x1 pipelines... how is that different from 8x2? Is that 16 single pipelines vs. 8 parallel lines (i.e., one for data going and one coming)? Or what?
Last I heard, the NV40 was going to be 8x1, and 16x0 in some situations.
didn't fuad say his 9800xt with a 505core was the fastest 9800 out there?
WildKard, I know :D but PCI-X(press) sounds better than PCI-E :D
The first number is the number of pixel pipelines, the second the number of texture units per pipeline.
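For what it's worth, the pipes-times-TMUs notation maps onto theoretical fillrate like this (the 500 MHz clock is just a placeholder, not any real card's spec):

```python
def fillrates(pipes, tmus_per_pipe, clock_mhz):
    """Theoretical pixel and texel fillrates, in Mpixels/s and Mtexels/s."""
    pixel = pipes * clock_mhz
    texel = pipes * tmus_per_pipe * clock_mhz
    return pixel, texel

# 4x2 vs 8x1 at the same clock: identical texel rate, but the 8x1
# design pushes twice as many single-textured pixels per clock.
print(fillrates(4, 2, 500))  # (2000, 4000)
print(fillrates(8, 1, 500))  # (4000, 4000)
```

That's why 16x1 vs 8x2 matters mostly for single-textured (and shader-heavy) workloads; with two textures per pixel, the two layouts come out even on paper.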
Fastest stock... but even that's not true, I guess. I'm sure some dude dripped his XT into LN2 without a vmod and got a higher OC. Quote:
Originally posted by b0bd0le
didn't fuad say his 9800xt with a 505core was the fastest 9800 out there?
No, no, no.... Quote:
Originally posted by WildKard
Does anybody ACTUALLY know that its PCI-EXPRESS and not PCIX, PCIX is an old standard and is on hundreds upon hundreds of workstation motherboards....its PCIe .... GET USED TO SAYING IT PEOPLE
PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe get it?
PCX, PCX, PCX.....get it?
I think I'd rather wait till I see the benchmarks than speculate on the specs. All I'm worried about is whether Futuremark's credibility gets tarnished any further when the losing manufacturer starts trying to fudge the figures in 3DMark2005.
Hm, 16x1 sounds reaaaally good, but you never know what ATI meant by saying the R420 sports "extreme" pipelines :D
Cool info (seen it before) and all, but why on earth is this in Xtreme Overclocking?
With 5000+ posts I'd know where to put a thread. :rolleyes:
As cool as ATI's reaction to the release of the NV40 specs was, they seem to have at least an 8x2 design.
Yeah. Cool on the outside. Inside at ATI it's total panic :D
You were doing it two months ago, when there was no information out, and in every thread. It's a little more appropriate now, but I still must say it's damn near impossible to predict... just wait for benchies... remember the NV30. Quote:
Originally posted by Geforce4ti4200
"what is your guess on the next gen card performence btw? i guess a r420/nv40 will pull around 27-28K stock in 2k1 with an a64 3200+ at stock speeds, cant wait to go to the cebit and see those cards"
know whats funny and sad? I get flamed for those predictions and guesses, they seem to like toying with me, how immature! anyway thats a bit high for stock, seeing stock on a 9800xt is not even 21k! we may be seeing stock scores of more like 24k or so with the r420. socket 939 and 3700+ may be more like 27k stock however and low 30k overclocked
The way I see it, this should be the last AGP upgrade before the obligatory x16 upgrade. All those DDR2 questions won't be resolved by the time these cards drop anyway, so why not settle for a much faster AGP card you'll probably use for a while? Quote:
Originally posted by Hollywood
Will a good Socket 939 AMD board be out by then and will it have PCIX?
How long will it take for PCIX to be refined?
Just how soon is PCIX gonna hit?
What the heck is up with DDRII?
I do know that the 423 is the x16 version of it, and is a native solution...
I know the difference... PCIX is easier to type. I use the X for Express like we use the X for Xtreme. Quote:
Originally posted by WildKard
Does anybody ACTUALLY know that its PCI-EXPRESS and not PCIX, PCIX is an old standard and is on hundreds upon hundreds of workstation motherboards....its PCIe .... GET USED TO SAYING IT PEOPLE
PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe -PCIe - PCIe get it?
PCI-X was a server- and workstation-based solution for higher data bandwidth while retaining full backward PCI compatibility. However... since it's deader than JFK, when someone is talking about PCIX in Feb of 2004, we generally get what they're talking about. ;)
Hence the term PCX I read somewhere....
Ironically enough I read about the architecture/pipelines from The Inquirer and Xbit Labs.
Why don't the companies ever get cool on us and simply give a technology a name? Why does everything have to be an acronym anyhow??
Simply call it "Express."
:-\ I have to disagree; a new PCI-X is coming, along with PCIe, so interchanging the two still creates confusion. Quote:
Originally posted by Hollywood
I know the difference...PCIX is easier to type. I use the X for Express like we use the X for Xtreme.
PCI-X was a server- and workstation-based solution for higher data bandwidth while retaining full backward PCI compatibility. However... since it's deader than JFK, when someone is talking about PCIX in Feb of 2004, we generally get what they're talking about. ;)
Because then you can't have cool trivia contests with acronyms no one knows. Quote:
Originally posted by Hollywood
Why don't the companies ever get cool on us and simply dub an technology by a name. Why does everything have to be an acronym anyhow??
Simply call it "Express."
Peripheral Component Interconnect
Complementary Metal Oxide Semiconductor
Static Random Access Memory
Advanced Technology Attachment Packet Interface
Integrated Drive Electronics
...ok, time to quit, i'm having too much fun
hahaha Quote:
Originally posted by SupaMan
Because then you can't have cool trivia contests with acronyms no one knows
Peripheral Component Interconnect
Complementary Metal Oxide Semiconductor
Static Random Access Memory
Advanced Technology Attachment Packet Interface
Integrated Drive Electronics
...ok, time to quit, i'm having too much fun
Yeah, right :rolleyes: nVidia doesn't even have a single sample of the card, only pre-production stuff and a nice GPU design in THEORY! The NV30 was a great card on the blueprints as well, but it turned out to be a big flop.... Quote:
Originally posted by Veritas.no
Yeah. Cool on the outside. Inside at ATI it's total panic :D
Yup... I remember the NV30 was supposed to rule the world... Six months delayed, and it totally stank... I don't believe anything Nvidia says.
Wait for benchies. Thats it, and thats all :D
I agree. I will personally wait for the benchmarks. Of real games, not 3DMark. You cannot play 3DMark, only run it :)
I am particularly interested in the X2 scores, as I am a flight sim junkie and would love to run that game at max resolution with 6xAA and 16x aniso, which I believe no one can do and keep their sanity.
I haven't been impressed with NVidia at all for a long while, and I doubt the NV40 is going to change that. We'll see, though.. Maybe they can catch up with ATi in image quality, who knows.
So... I hope nVidia will regain the confidence that was shaken after the NV30. I also hope that my next upgrade will be precisely the NV40. I've always had a soft spot for nVidia, but I have an even bigger soft spot for 3DMark scores ;) and therefore I will wait for the benchmark results.
I'm happy with the R350 I have now...
It's really hard to predict what scores it will get...
I scored around 20.5K with mine, might get around 22K...
I really doubt the stock score will be anywhere higher than 25K, if even in that area...
Unless they go 16x2 or something, it will just be faster, but not a whole world of difference...
The NV40 is only going to be good in 3D01 if they release another driver that runs it well.
Otherwise, if we can't force 44.03, it will be significantly behind the Radeon in 3D01 (which, with the ridiculous scores we are getting, should be phased out soon).
Maybe you should drop by opp's or macci's lab ;) Quote:
Originally posted by WerewolfX
I agree I will personally wait for the benchmarks. Of real games not 3dmark. You cannot play 3dmark only run it :)
I am particulary interested in the X2 scores as i am a flight sim junky and would love to run that game at max resolution with 6xAA and 16xAnio. Wich i believe no one can run that game at and keep their sanity.
So how are they going to cool NV40?
http://www.ocfaq.com/files/0228_777_engine_front.jpeg Quote:
Originally posted by IvanAndreevich
So how are they going to cool NV40?
harhar
SupaMan
..taking up ALL the PCI slots?
OT: The V3 3500 destroyed the TNT2U in FPS in nearly every game. :D Its 16-bit quality was unmatched too. ;) Quote:
Originally posted by QuadDamage
you mean when they introduced TNT2 which destroyed Voodoo3.
/OT
What are you smoking? I want some. The V3 still surpassed the TNT2. It wasn't until the GeForce that nVidia finally really took the crown. Quote:
Originally posted by QuadDamage
you mean when they introduced TNT2 which destroyed Voodoo3.
The TNT2 was the better card in anything but Glide, the one thing that made 3dfx so famous. The TNT2 could run 32-bit color and use larger textures. The Voodoo3 didn't even properly support OpenGL.
ROFL :ROTF:
Man... I still remember when I had the most killer graphics setup EVER: a Diamond Viper V770 AGP TNT2 with twin Voodoo Monster II cards hooked together in SLI mode. Three vidcards in one machine.
Why can't they make cards so that you can hook them up like that any more?
... 3*Radeon 9800XT :slobber:
With PCI-EXPRESS (happy now, WildKard??? ;) ) it will be possible again. Honestly, I would love the idea of being able to hook up graphics cards in parallel. Of course... that would make the benching game a freaking nightmare and freakishly expensive. Benching rigs with all 6 PCI-EXPRESS slots filled with R450s all hooked together would just stomp all of us "poorer" folk. Quote:
Originally posted by Total Immortal
why can't they make cards so that you can hook them up like that any more?
... 3*Radeon 9800XT :slobber:
Mmmm...kinda gets me all tingly just thinking about it though!!!
Radeons can be hooked up ;) there are 4-way Radeon 9700 Pro cards, but they have a custom interface...
In theory you can hook up 256 R300/350/360 GPUs :D
I could have sworn some company tried a dual-core R300, but it refused to POST.
Does this mean I could buy six cheap PCI-E cards and run 'em all in parallel?
Sapphire built a dual-core R300, but they didn't sell it because it was too expensive to produce or something...
I've seen pics of a four-core 9800-based card that used the AGP slot and one PCI slot, but I don't have any details on it. Quote:
Originally posted by Slickthellama
i could have sworn some company tried a dual core R300 but it refused to post.
Nope, it wouldn't POST. Quote:
Originally posted by STEvil
Sapphire built a Dual-core R300, but they didnt sell it because it was too expensive to produce or something...
Hercules built that card, and they had synchronization problems, and they needed an 8-layer PCB I think... very expensive!
Dual-GPU cards are very inefficient; usually a 2-GPU card is only 25% faster than a single-GPU card with the same chip...
Not necessarily; the VSA-100 was exactly twice as fast with 2 chips over 1, and twice as fast again with 4 over 2. Sure, if you put two chips not designed for it together you're not going to get optimal results, but if they are designed with that in mind it can work very well, as shown by 3dfx.
Don't think we will see dual chips for a while: first, a single chip would need to be maxed out; second, we may just run in parallel with more than one PCI-E slot. Imagine! Those with the most money would get the best performance. Run 5 of the $500 cards.
Well... I'm not sure if you know this, but SLI stood for Scan Line Interleave. The first card processed the odd lines and the second processed the even ones... I suppose if you had more than two, they would be auto-assigned a range to process... but this is neither here nor there, as it will probably never happen. Quote:
Originally posted by Geforce4ti4200
dont think we will see dual chips for a while, first single chip would need to be maxed out, second we may just run in parrallel with more than 1 pci-e slot. imagine! those with the most money would get the best performance. run 5 of the $500 cards
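The odd/even split described above can be sketched as a toy function (purely illustrative; real SLI did this in hardware, and the 0-based indexing here makes card 0 take the even lines):

```python
def assign_scanlines(num_lines, num_cards):
    """Map each card to the scanlines it renders: every Nth line,
    offset by the card's position in the chain."""
    return {card: list(range(card, num_lines, num_cards))
            for card in range(num_cards)}

# Two cards, 8-line frame: card 0 takes even lines, card 1 odd lines.
print(assign_scanlines(8, 2))
```

Extending the chain past two cards falls out naturally: with N cards, each one renders every Nth scanline, which is roughly the "auto-assigned range" the post guesses at.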
Hmm, so a serious bencher would need what... 6x2 cascade stages for the GPUs??! :D
Put away your poppers, 'cos they mess with your head. A TNT2 Ultra 32MB slower than a 16-meg V3? Please, I had both, so you can stop telling me the V3 was faster. Quote:
Originally posted by Kalway
What are you smoking? I want some. V3 still surpassed TNT2. It wasn't until Geforce that nvidia finally really took the crown.
I had a TNT2 Vanta 8MB and a Voodoo3 2000 8MB, and the TNT2 Vanta was faster in low res and just a bit slower in high res. A 64-bit Vanta almost as fast as a 128-bit Voodoo3; that's just sad. In low res, memory bandwidth isn't very important, so my Vanta owned the Voodoo3.
My own memory tells me this...
The V3 was faster than the TNT2... [or was it that a dual V2 setup in SLI was faster?]
There's a card after the TNT2 I haven't seen anyone mention...
The TNT2 Ultra! That was the card that definitely took the speed crown away. Pretty sweet card in its time.
The GeForce was the next blow that shattered 3dfx.
No, you've got it wrong:
TNT > Voodoo2,
then Voodoo3 > TNT,
next TNT2 > Voodoo3.
Then the GeForces came out, and 3dfx's Voodoo5 did pretty well against them, but the GeForce had T&L and was a lot more futureproof.