mh, why? this question isn't meant to give offence, i'm just curious.Quote:
Originally Posted by Ubermann
It is?Quote:
Originally Posted by LOE
Quote:
Originally Posted by RaZz!
that's whyQuote:
Originally Posted by Ubermann
So can anyone give a reasonable expectation as to what frame jump we might see? +520MHz 1.1ns mem, +150MHz GPU. I'm talking maxed out everything @ 1600 res? I'm thinking maybe another 10fps..?
I don't think so.Quote:
Originally Posted by HKPolice
As in MHz it will be a small speed bump. It will have 16 pipelines but will up the ROPs from 16 to 48. This will up the performance a lot. Just look at the RV530/X1600: it only has 4 pipelines but has 12 ROPs, and it beats the 8 pipeline cards and can keep up with the 12 pipeline cards. The R580, with the info I'm getting, is very shocking, with a setup of 16 pipelines and 48 ROPs. This should make it about 2.5X the shader performance of the R520 at the same core speed.
"RSX is G70, 90 nanometre tweaked "
http://www.theinquirer.net/?article=24445
"When it comes to G71 as a graphic chip, Nvidia will get that chip to insane speeds and we expect at least 650 to 700MHz for the cherry picked top of the range."
http://www.theinquirer.net/?article=27463
Just want to clear something up. ROPs are not shader ALUs. ROPs are raster operators, responsible for turning a rendered scene into pixels. They also perform anti-aliasing, Z compression, and color calculations. They're pretty much responsible for the pixel fillrate of a graphics card, which is important for helping determine performance scaling when increasing the resolution and applying AA (along with memory bandwidth).Quote:
Originally Posted by SnipingWaste
The rumor was that the R580 would have 16 texture units, 16 ROPs, but 48 shader ALUs. You're right that it's similar in approach to the X1600, with 4 texture units, 12 shader ALUs, and likely 4 ROPs (I don't have confirmation of that myself, it might be 8). In essence, while I'm sure the clockspeeds of the R580 will increase over the R520, major increases in the pixel and texture fillrates of the card won't occur.
Now what's funny about this configuration is that it will enable the R580 to roughly equal the G70's shader capacity on a per-cycle basis. As impressive as 48 ALUs sound, this is merely making up for lost ground.
NVIDIA's G70 architecture is capable of 10 shader ops per pipe, per cycle. Multiplied by 24 pipelines, this equals 240 shader ops per cycle.
ATI's R520 architecture can do 5 shader ops per pipe, per cycle. Multiply that by the R580's 48 ALUs and you get 240 shader ops per cycle. The same as NVIDIA's.
On the other hand, one would expect the R580 to clock in at 700MHz or more on the core, while the G70 isn't getting any higher than 600MHz on 110nm. So this will give the R580 a shader performance advantage there, one that hasn't been seen since the R300/NV30 days.
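To put some numbers on that per-cycle comparison, here's a quick back-of-the-envelope calc in Python. The ops-per-pipe counts, ALU/ROP counts, and clocks are just the rumored figures quoted in this thread, so treat it as a sketch rather than real specs:
Code:
# Rough math from the numbers floating around this thread (all rumored, nothing official).

def ops_per_cycle(ops_per_pipe, pipes):
    # Peak shader ops issued per clock cycle.
    return ops_per_pipe * pipes

def ops_per_second(ops_per_pipe, pipes, clock_mhz):
    # Peak shader ops per second at a given core clock.
    return ops_per_pipe * pipes * clock_mhz * 1e6

print(ops_per_cycle(10, 24))                 # G70:  10 ops/pipe * 24 pipes = 240
print(ops_per_cycle(5, 48))                  # R580:  5 ops/ALU  * 48 ALUs  = 240

print(ops_per_second(10, 24, 580) / 1e9)     # G70  @ 580MHz: ~139 Gops/s
print(ops_per_second(5, 48, 700) / 1e9)      # R580 @ 700MHz: ~168 Gops/s

# Neither part adds ROPs over 16, so peak pixel fillrate only grows with clock:
print(16 * 700e6 / 1e9)                      # 16 ROPs * 700MHz = 11.2 Gpixels/s
Per cycle both come out to 240 shader ops, so the clockspeed is what would actually separate them.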
" G70 has just more pipelines and a slight redesign of its already successful NV40 marchitecture."
"ATI on the other hand had a different plan. It wanted to redesign the chip and it spent a lot of time to redesign its memory controller."
http://www.theinquirer.net/?article=27456
Cybercat, you're right about the ROPs and ALUs. I need some coffee to wake up. It's 12 ALUs (3 full and mini ALUs per pipeline).
Beyond3d has the specs on RV530 here.
http://www.beyond3d.com/misc/chipcom...r=Order&cname=
The way I figure it, the R580 will be like the RV530 but with 4 times the pipelines, ROPs, and ALUs.
GTX performance scales linearly with higher clocks.Quote:
Originally Posted by Sneil
The new GTX at 580MHz has a 35% higher clock than a stock 430MHz GTX.
Combine this with the enormous bandwidth and the 512MB memory, you could expect framerates which are 30-40% higher than before.
It would totally destroy the X1800XT.
Of course this will only be the case in GPU limited scenarios, meaning high resolution with lots of AA and AF.
This really is the best move Nvidia has ever made. While ATi is struggling to get cards to the market, Nvidia is hitting them with a monster of a card and immediate availability.
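For what it's worth, the 30-40% figure is roughly what simple clock scaling predicts in a fully GPU-limited scene. A quick Python sketch; the 0.9 efficiency factor is my own assumption, since scaling is rarely perfectly linear:
Code:
# Naive clock-scaling estimate for a GPU-limited scenario.
# Clocks are the ones quoted above; scaling_efficiency is an assumption
# (1.0 = perfectly linear, real games usually land a bit under that).

def estimated_fps(base_fps, old_clock_mhz, new_clock_mhz, scaling_efficiency=0.9):
    speedup = 1 + (new_clock_mhz / old_clock_mhz - 1) * scaling_efficiency
    return base_fps * speedup

print(estimated_fps(40, 430, 580))   # ~52.6 fps, i.e. roughly a 30% jump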
lol it's funny because the only backup you deliver is TheInq...worthless..
G70 might have its roots in the GeForce 6 series... but the 7800 series is a new beast... stop this BS...
People are still saying the G70 is like the NV47 or whatever... stop it, you're only making a fool out of yourself... this card is gonna waste everything out there... new architecture or not. :nono:
It's not only the extra pipes you know ;)
If the GTX with 512MB can already reach this core speed at 110nm, I wonder which speed it could do after a die shrink to 90nm.
Differences between the NV47/G70 and NV40..Quote:
Originally Posted by Tim
Second shader pipe, 128bit floating point precision, hardware support for transparency AA, 8 added pipelines, 2 added vertex shaders...
The G70 is the NV47, but the NV47/G70 is still quite a nice rehaul of the NV40 any way you look at it. Considering the speed/power of the chip in its current state, and what it'll do with this upped clockspeed/mem speed, did they really need to make a completely new chip?
I never said I agreed 100% with the Inq... You know what is not true and what makes sense, so no need to make personal attacks.
If you have info to correct what you believe I said, do it.
Learn to write in a non-hostile way and you'll get a lot more respect imo.
And learn to respect other points of view... even if you feel uncomfortable ;-)
Maybe I should, but if you want to play ball be prepared to catch it...
You're the 1001st person to say that stupid stuff... at some point it's just enough, and that was when you made your posts.
I respect your opinion... I've just had enough of all that whining that the G70 is just a speedbump... even with a 580MHz-core card right under their nose, people will still say out loud... oh, it's just an overclocked, speedbumped card... it's ridiculous! :stick:
btw...I have nothing personal against you, but I just had enough of that ATI fanboy talk..
There has been very little ATI fanboy talk on this forum.
Eh, there's been talk for both sides, but can we PLEASE keep this on topic guys?
I'd really hate to see yet another good thread closed due to flaming...
I don't think of the G70 as a speed bump. I think of it as more like a 'refinement' on the existing design.
The NV40 was good, but it had some flaws. It was a power hog, and it didn't clock very efficiently. The WMP acceleration was broken, and there was never any native PCIe support (the bridge chip just made things worse). Performance was decent, but it still suffered from slight inefficiencies, particularly with the vertex engine, some scheduling issues, and overall latency. The G70 smoothed over these rough edges, improved shader efficiency, power efficiency, lowered latencies, and provided native PCIe support. Plus none of its features were broken, unlike the NV40, which rendered a few million transistors useless.
The G70 also doesn't use shader replacement, like the NV40 mildly did, and the NV3x was TERRIBLE about. The NV40 still produced the proper image, it just would render them using different shaders than called for in some cases.
The G70 is anything but a speed bump, it's like cybercat said, a refinement. Much like the x800 was a refinement of the 9800.
Wish I had the money to buy it. That card costs as much as my computer.
I'm a bit foggy on what shader replacement is. Is it where NVIDIA uses the driver to substitute certain shader programs with smaller, lower-precision ones?
No, NVIDIA used a special compiler for the NV3x and NV4x; it's kind of like the order of operations in math. IIRC, when they render a scene, they set up an order in which everything is rendered. It's not lower precision, it's more that if an effect can be done identically with a shader that runs faster for the architecture, it uses that instead... I think it was the XBitLabs review of the 7800GTX that explained it completely.
Either way, it rendered an identical image to its ATi counterpart, it just did it a different way.
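To illustrate what that means in practice (a toy example, not actual driver code): that kind of replacement just swaps an expression for an algebraically equivalent one that's cheaper on the hardware, so the output pixel stays the same.
Code:
# Toy illustration of shader replacement as described above: two algebraically
# equivalent ways of computing the same lighting term. A driver/compiler can
# pick whichever maps better onto its ALUs without changing the rendered
# result (give or take floating-point rounding).
import math

def lit_original(albedo, light, ambient):
    return albedo * light + albedo * ambient   # 2 MULs + 1 ADD

def lit_replaced(albedo, light, ambient):
    return albedo * (light + ambient)          # 1 MUL + 1 ADD

a, l, amb = 0.75, 0.5, 0.125
assert math.isclose(lit_original(a, l, amb), lit_replaced(a, l, amb))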
Yes, that's why I don't like my 6800GT and why I am using the avatar I'm using. I've had both ATI and Nvidia.Quote:
Originally Posted by Cybercat
Now let's stay ontopic guys :)
http://www.anandtech.com/video/showdoc.aspx?i=2451&p=5
It was anandtech.
Also, ATi uses forms of shader replacement as well; even John Carmack has noted this in the Doom 3 benchmarks at [H]ardOCP.
The G70 is the first card since the ti4200 *not* to use any form of shader replacement.