mh, why? this question isn't meant to give offence, i'm just curious.Quote:
Originally Posted by Ubermann
It is?Quote:
Originally Posted by LOE
Quote:
Originally Posted by RaZz!
that's whyQuote:
Originally Posted by Ubermann
so can anyone give a reasonable expectation as to what frame jump we might see? +520mhz 1.1ns mem, +150mhz gpu. im talking maxed out everything @ 1600 res?? im thinking maybe another 10fps..?
I don't think so.Quote:
Originally Posted by HKPolice
In terms of MHz it will be a small speed bump. It will have 16 pipelines but more ROPs, going from 16 to 48. This will up the performance a lot. Just look at the RV530/X1600: it only has 4 pipelines but has 12 ROPs, and it beats the 8-pipeline cards and can keep up with the 12-pipeline cards. The R580, with the info I'm getting, is very shocking with a setup of 16 pipelines and 48 ROPs. This should make it about 2.5x the shader performance of the R520 at the same core speed.
"RSX is G70, 90 nanometre tweaked "
http://www.theinquirer.net/?article=24445
"When it comes to G71 as a graphic chip, Nvidia will get that chip to insane speeds and we expect at least 650 to 700MHz for the cherry picked top of the range."
http://www.theinquirer.net/?article=27463
Just want to clear something up. ROPs are not shader ALUs. ROPs are rasterization operators, responsible for turning a rendered scene into pixels. It also performs anti-aliasing, Z compression, and color calculations. It's pretty much responsible for the pixel fillrate of a graphics card, which is important for helping determine performance scaling when increasing the resolution and applying AA (along with memory bandwidth).Quote:
Originally Posted by SnipingWaste
The rumor was that the R580 would have 16 texture units, 16 ROPs, but 48 shader ALUs. You're right that it's similar in approach to the X1600, with 4 texture units, 12 shader ALUs, and likely 4 ROPs (I don't have confirmation of that myself, it might be 8). In essence, while I'm sure the clockspeeds of the R580 will increase over the R520, major increases in the pixel and texture fillrates of the card won't occur.
Now what's funny about this configuration is that it will enable the R580 to roughly equal the G70's shader capacity on a per-cycle basis. As impressive as 48 ALUs sound, this is merely making up for lost ground.
NVIDIA's G70 architecture is capable of 10 shader ops per pipe, per cycle. Multiplied by 24 pipelines, that equals 240 shader ops per cycle.
ATI's R520 architecture can do 5 shader ops per pipe, per cycle. Multiply that by the R580's 48 ALUs and you get 240 shader ops per cycle, the same as NVIDIA's.
On the other hand, one would expect the R580 to clock in at 700MHz or more on the core, while the G70 isn't getting any higher than 600MHz on 110nm. So the R580 will have a shader performance advantage there, one that hasn't been seen since the R300/NV30 days.
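To put rough numbers on the per-cycle comparison above, here's a quick back-of-the-envelope Python sketch. The 48 ALUs and ~700MHz clock for the R580 are the rumoured figures from this thread, and 550MHz is the 512MB GTX core clock being talked about here, so treat it as a sanity check, not real specs.
Code:
# Back-of-the-envelope peak shader throughput, using the figures quoted above.
# R580 numbers (48 ALUs, ~700MHz) are rumours; 550MHz is the 512MB GTX clock.

def peak_shader_ops(units, ops_per_unit_per_cycle, core_mhz):
    """Peak shader ops per second = units * ops per cycle * core clock."""
    return units * ops_per_unit_per_cycle * core_mhz * 1_000_000

g70 = peak_shader_ops(24, 10, 550)   # 24 pipes * 10 ops = 240 ops/cycle
r580 = peak_shader_ops(48, 5, 700)   # 48 ALUs * 5 ops = 240 ops/cycle

print(f"G70  ~{g70 / 1e9:.0f} Gops/s")              # ~132 Gops/s at 550MHz
print(f"R580 ~{r580 / 1e9:.0f} Gops/s (rumoured)")  # ~168 Gops/s at 700MHz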
" G70 has just more pipelines and a slight redesign of its already successful NV40 marchitecture."
"ATI on the other hand had a different plan. It wanted to redesign the chip and it spent a lot of time to redesign its memory controller."
http://www.theinquirer.net/?article=27456
Cybercat, you're right about the ROPs and ALUs. I need some coffee to wake up. It's 12 ALUs (3 full-and-mini ALU pairs per pipeline).
Beyond3d has the specs on RV530 here.
http://www.beyond3d.com/misc/chipcom...r=Order&cname=
From what I'm hearing, the R580 will be like the RV530 but with 4 times the pipelines, ROPs, and ALUs.
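If the R580 really is just an RV530 scaled up 4x, the unit counts and the fillrate picture work out roughly like this. This is only a sketch: the RV530 ROP count and the ~700MHz R580 clock are the assumptions from earlier posts, and 625MHz is the X1800 XT's stock core clock.
Code:
# Scale the RV530/X1600 unit counts 4x, as suggested above, and compare pixel
# fillrate (ROPs * core clock) to show why fillrate would barely move vs the R520.
rv530 = {"texture_units": 4, "shader_alus": 12, "rops": 4}   # ROP count assumed (might be 8)
r580 = {unit: count * 4 for unit, count in rv530.items()}    # rumoured R580 layout
print(r580)   # {'texture_units': 16, 'shader_alus': 48, 'rops': 16}

r520_fill = 16 * 625e6   # X1800 XT: 16 ROPs at 625MHz -> 10.0 Gpixels/s
r580_fill = 16 * 700e6   # rumoured R580: 16 ROPs at ~700MHz -> 11.2 Gpixels/s
print(f"pixel fillrate gain: {r580_fill / r520_fill - 1:.0%}")   # ~12%, mostly from clock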
GTX performance scales up linearly with higher clocks.Quote:
Originally Posted by Sneil
The new GTX at 580Mhz has a 35% higher clock than a stock 430Mhz GTX.
Combine this with the enormous bandwidth and the 512MB memory, you could expect framerates which are 30-40% higher than before.
It would totally destroy the X1800XT.
Of course this will only be the case in GPU-limited scenarios, meaning high resolution with lots of AA and AF.
This really is the best move Nvidia has ever made. While ATi is struggling to get cards to the market, Nvidia is hitting them with a monster of a card and immediate availability.
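That 35% figure is easy to sanity-check; here's a rough sketch. The 40 fps baseline is purely hypothetical, and the linear scaling only holds in fully GPU-limited scenes.
Code:
# Sanity-check the clock-scaling claim: 512MB GTX at 580MHz vs a stock 430MHz GTX.
stock_mhz, new_mhz = 430, 580
clock_gain = new_mhz / stock_mhz - 1
print(f"core clock gain: {clock_gain:.0%}")   # ~35%

# Frame rate scales at most linearly with core clock in a GPU-limited scene,
# so a hypothetical 40 fps at 1600x1200 with AA/AF would become at best:
base_fps = 40   # hypothetical baseline, not a benchmark result
print(f"best case: {base_fps * (1 + clock_gain):.0f} fps")   # ~54 fps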
lol it's funny because the only backup you deliver is TheInq...worthless..
G70 might have its roots in the Geforce6 series...but the 7800 series is a new beast....stop this BS...
People are still saying G70 is like NV47 or whatever...stop it, you're only making a fool out of yourself....this card is gonna waste everything out there...new architecture or not. :nono:
It's not only the extra pipes you know ;)
if the GTX with 512MB can already reach this core speed at 110nm, I wonder which speeds it could hit after a die shrink to 90nm
Differences between the NV47/G70 and NV40..Quote:
Originally Posted by Tim
Second shader pipe, 128bit floating point precision, hardware support for transparency AA, 8 added pipelines, 2 added vertex shaders...
The G70 is the NV47, but the NV47/G70 is still quite a nice rehaul of the NV40 any way you look at it. Considering the speed/power of the chip in its current state, and what it'll do with this upped clockspeed/mem speed, did they really need to make a completely new chip?
I never said I agreed 100% with the Inq....You know what is not true and what makes sense, so there's no need to make personal attacks.
If you have info to correct what you believe I said, do it.
Learn to write in a non-hostile way and you'll get a lot more respect, imo.
And learn to respect other point of view....even if you feel uncomfortable ;-)
Maybe I should, but if you want to play ball be prepared to catch it...
You're the 1001st person to say that stupid stuff....at some point it's just enough...and that was when you made your posts.
I respect your opinion...I just had enough of all that whining that the G70 is just a speedbump or whatever...even if there is a 580MHz-clocked card right under their nose, people will still just say out loud...oh, it's just an overclocked, speedbumped card...it's ridiculous! :stick:
btw...I have nothing personal against you, but I just had enough of that ATI fanboy talk..
There has been very little ATI fanboy talk on this forum.
Eh, there's been talk for both sides, but can we PLEASE keep this on topic guys?
I'd really hate to see yet another good thread closed due to flaming...
I don't think of the G70 as a speed bump. I think of it as more like a 'refinement' on the existing design.
The NV40 was good, but it had some flaws. It was a power hog, and it didn't clock very efficiently. The WMP acceleration was broken, and there was never any PCIe support (the bridge chip just made things worse). Performance was decent, but it still suffered from slight inefficiencies, particularly with the vertex engine, some scheduling issues, and overall latency. The G70 smoothed over these rough edges, improved shader efficiency and power efficiency, lowered latencies, and provided native PCIe support. Plus none of its features were broken, unlike the NV40, which rendered a few million transistors useless.
The G70 also doesn't use shader replacement, like the NV40 mildly did, and the NV3x was TERRIBLE about. The NV40 still produced the proper image, it just would render them using different shaders than called for in some cases.
The G70 is anything but a speed bump, it's like cybercat said, a refinement. Much like the x800 was a refinement of the 9800.
wish i had the money to buy it. That card costs as much as my computer.
I'm a bit foggy on what shader replacement is. Is it where NVIDIA uses the driver to substitute certain shader programs with smaller, lower-precision ones?
No, NVidia used a special compiler for the NV3x and NV4x; it's kind of like the order of operations in math. IIRC, when they render a scene, they set up an order in which everything is rendered. It's not lower precision, it's more that if an effect can be done identically with a shader that runs faster on that architecture, it uses that instead... I think it was the XBitLabs review of the 7800GTX that explained it completely.
Either way, it rendered an identical image to its ATi counterpart, it just did it a different way.
Yes, that's why I don't like my 6800GT and why I am using the avatar I'm using. I've had both ATI and Nvidia.Quote:
Originally Posted by Cybercat
Now let's stay ontopic guys :)
http://www.anandtech.com/video/showdoc.aspx?i=2451&p=5
It was anandtech.
Also, ATi use forms of shader replacement as well, even John Carmack has noted this in the Doom 3 benchmarks at [h]ardocp.
The G70 is the first card since the ti4200 *not* to use any form of shader replacement.
I see, thanks.Quote:
Originally Posted by DilTech
Actually I was wrong, so your reasoning (and hate, for that matter) is invalid.Quote:
Originally Posted by alexio
And this is on topic. It's discussing the points and advantages of the G70 architecture. Why will this 512MB GTX be a beast? Because the G70 has a powerful architecture, and higher clockspeeds only amplify this.
I can tell very well if something is less precise or not. Whether it is because of replacement or just because things are processed in a different order I don't know, and I don't actually care, but my eyes don't lie.Quote:
Originally Posted by Cybercat
I have two systems here that I put side by side once and ran some games on them, one using a 9800 Pro and one using a 6800GT (NU @ GT actually, but this doesn't affect IQ), and it was clear that with whatever driver I used, the 9800 Pro just looked better. I switched screens to see if the 6800GT looked better on another screen, but that wasn't the case.
Before the Asus 6800NU I had an XFX 6800LE and I RMA'd it because it was broken; I had strange glitches. When I got the new card I still had the same problem, and it seems the XFX wasn't broken after all, and what I saw was just bad IQ. I kept it because it was my third card already (the first XFX 6800LE really was broken) and at the moment I was happy that it unlocked to 16*1.6.
You're right, we just need to keep it friendly to prevent flames.Quote:
And this is on topic. It's discussing the points and advantages of the G70 architecture. Why will this 512MB GTX be a beast? Because the G70 has a powerful architecture, and higher clockspeeds only amplify this.
I wonder why it is I've never seen any major differences in IQ before in articles published by, say, HardOCP.
Do you have any specific examples? Games that you saw the most difference in? Maybe even particular scenes or places within those games that really showed a difference?
You know as well as I do that actual accounts go further than just generalizations.
XBit labs did a quality comparison of ATi and NVidia, zoomed in at up to 10x. There was NO visible difference in any game but painkiller, and in painkiller neither one rendered a "worse" image than the other, they just weren't identical.
They went thru about 5 or 6 different games.
Also, alexio, NVidia wasn't using lower-precision shaders with the NV40; in fact, the NV40 wasn't capable of going below 32-bit FPP. Low and high were both set to the same point on that card!
They need review material, so they will never say a card is bad when there are no cries from the gaming masses about bad IQ.Quote:
Originally Posted by Cybercat
Any game where you can see very far ahead of you and where lines get thinner towards the horizon gives problems. These lines are unsharp and look like they are moving, disappearing for a second and then coming back again.Quote:
Do you have any specific examples? Games that you saw the most difference in? Maybe even particular scenes or places within those games that really showed a difference?
In Half-Life 2 you can see glitches all the time. Textures look really weird. It's hard to describe and there isn't really a specific scene where this happens, it just happens all the time.
The biggest problem I have is that lines are not sharp and tend to flicker. You can see this in many games and it irritates. Also fps and mouse lag are a problem. Average framerates are nice but fps consistency isn't there.
I'm the kind of person that gets sick from gaming, just like you can get sick in a car. With the 9800 Pro I don't have this problem; with the 6800NU I DO have this problem at settings comparable to the ones used on the 9800 Pro. V-Sync helps somewhat against this, but I still get sick from the flickering of the lines, and stutters still occur.
Chipset drivers and vga drivers, everything is installed as it should be, benchmark scores are good (3DMark looks much worse on the 6800, by the way). I played with the LOD bias and this helps somewhat. Setting it to -3 helps because it blurs the screen (and thus the flickering lines).
So you can see that it's more of a feeling than anything specific.
lol, so that's why the low precision trick for FarCry and HL2 doesn't work on GF6 cards, only FX cards.Quote:
Originally Posted by DilTech
EDIT: So Alexio, you could almost say that the reason no websites have really recorded a difference in IQ between screenshot comparisons, is because most of the IQ problems you experience are in motion, rather than the still images themselves produced, right?
Yes, that is correct.
IIRC, there were a couple of avis to demonstrate the flicker, however.
Yes that's correct, still images are the same as ATI still images. I think the problem might just be the fact that it always forces the high precision. Upping the LOD bias will do the same on ATI cards. It looks like a line of pixels is jumping from one line on the screen to the next line of pixels and back. This problem gets worse as distances get bigger, to the point that the lines are only 1 pixel thick. This is what's causing the problems in racing games where the lines sometimes meet at infinity, and also in games like Serious Sam: The Second Encounter where you can see very far in the open fields.Quote:
Originally Posted by Cybercat
Well old games like SS:SE you should be able to run at least 4xAA and high enough resolutions to all but relieve that problem, I would think.
HL2 should be able to run at close to 1600x1200 on a 6800GT, and while that may not fix the issue entirely, it certainly wouldn't worsen it.
Try turning on the LOD Bias clamp alexio?
I didn't know this setting, where can I find it? In the driver or in Rivatuner?Quote:
Originally Posted by DilTech
I will try this if I get my other system to run again (no video for some reason, it's a motherboard issue it seems). The system I'm typing this from is a Shuttle SFF so I can't just put the card in, I need to change the PSU with a bigger one, etc.
It's a driver setting, in the same area as AA and AF. Turn on advanced options to see it.
whoa awesome...my graphics card feels sad :P
Guys, I'm just going to make one BOLD statement...
This card is *NOT* NVidia's big fish...
so what is? g80?
Ooooh cryptic... either this card will not be much faster, or there will be an even better card to come!
Oh, the card will still be obscenely fast, but...well.... You'll see what I mean.
I'm going to leave it at that for now, You'll see what I mean when the time comes.
nah, G80 will be a unified architecture; that one won't be around for a while.
then what will Nvidia's big fish be if it's not this monster, or do you mean big fish as in: there will be another nvidia card that's the best bang per buck?
ps. DilTech, you can always PM me the info if you just have to get rid of it
:banana: trust me i can keep a secret, could a dutch guy lie?
Quote:
Originally Posted by DilTech
as in months ? or a month
Just watch the news section...
For now I'll leave you with this...
This card is only the answer to the R520, the R580 has MUCH stiffer competition.
TheInq says 90nm G71 is coming soon, but by that time R580 will be out.
Somebody ought to tell ATi to first and foremost focus on their current series of graphics cards before they paper launch the next. Just to let us swallow this catastrophe [or "god-send", depending on which side you are on] first.Quote:
Originally Posted by Shadowmage
:rolleyes:
-k0nsl
That's Q1 2006.
X1800XT is supposed to launch Nov 5th, yet there's availability on Nov 3rd.
Are you feeling all right?
whats the last generation? the gtx? but it uses 32bit shaders as well.Quote:
Originally Posted by LOE
what are you talking about? :confused:
If Nvidia releases yet another "big fish" card, then why would someone pay $600+ for this one, DilTech?
He said competition for the R580, so next year!
That could be in 5 months at least....
Of course there will be faster cards coming, but watching the news section for 5 months sounds boring =)
I thought he meant something coming soon..
Seems like there's gonna be a product launch from nVidia the 7th...
Something is being launched the 7th.....
http://www.scan.co.uk/
The green boxing gloves...
Round 1
Round 2
Round 3 ----> Knockout.
cool ad =)
and from here on the green goblin is the green glove
Some real green shops here:
http://www.vr-zone.com.sg/?i=2890
ATI has been playing catch up since the 6800 series.....nVidia has had the best position for a looong time now....and for the whole of 2005 nVidia led the market....delivering the first hardlaunch...they did it again with the GT....now it's gonna be three strikes and ATI is out....
I'm almost certain that nVidia will do a double launch this month....one at the 7th and one at the 14th......both hardlaunched.
Mid range and High end.
The 6800 PCIe has disappeared....there is a large gap at the moment....the GS has been spotted in the 81.87 beta drivers...so it's only fair to expect that card...it's gonna be a 16-pipe killer for a little more than $200.....excellent.
The 7600 is most likely to debut in December....probably 90nm.
If nVidia pulls off a double hardlaunch (the 4th hardlaunch in a row, basically)....let's just say they will have legendary status....
And they will do it again in January/February with the G72....which will battle the R580.....nVidia just keeps pushing ATI...but ATI doesn't have the trump cards....they have every field covered...
That's my opinion.....
Did you guys see the new [H]ardOCP review on the XT? The difference between the BFG clocked at 460/1.3 and the XT is minimal in Quake4 even with the new Beta drivers (Actually the GTX still wins)....can you imagine what a spanking the 550/1800 card will do?
ATI gets knocked the Fuc|< out....
*edit* Just ignore it..
My point is nothing..keep on..
Oh lol sorry..... :)Quote:
Originally Posted by Ubermann
well, nvidia was kicked hard back in the 9700 and 5600 days...
the last gen was very competitive, but this one is owned by nvidia
It won't be 5 months...Quote:
Originally Posted by Ubermann
5 days? :DQuote:
Originally Posted by DilTech
Well all rumours point to Q1....R580 and the next nVidia card G7X :)
So should I upgrade this christmas (from 7800GTX 256 SLI to 512 SLI) or should I wait till Q1?
same here .. is it even worth buying a 512 gtx if another one is due out in q1 of 06 ?? or should i just get another 256 gtx to hold me over
Dont ask him if you should wait;ask him if he will wait :)
I'll tell you this much for now...
Quadros are being discontinued next round, as the gaming cards are to the point where they're as fast, if not FASTER, than the workstation cards.... Basically, that just means that there's no point in companies buying special cards anymore, and that studios will use the same cards to render things as we do to render games...
If what I was told yesterday is true(and it comes from someone who damn well should know!) then the r580 might as well not even be released....
There is nothing due out after the Ultra by either company until q2 2k6...fyi.
Rest of the stuff should be 7600/7200.
Perkam
Perkam, I'd retract that statement...both camps are ready for something before q2.
ATI will have to break the promise it made to its shareholders not to release the R580 before summer 2k6...and believe me, this isn't a company that can take its promises to its shareholders for granted anymore.Quote:
Originally Posted by DilTech
Perkam
Jeez, stop being so obscure.
Q1 2006 = 90nm G71 and R580
There, I said it.
Also, link to the shareholder's statement?
Use the search it's there... :)Quote:
Originally Posted by Shadowmage
Or :google:
Try the official ATi news thread.Quote:
Originally Posted by Shadowmage
Also, shadow, don't hold your breath on your estimate for Q1.
RV560/RV540/RV505 coming q1, R580 coming q2. All together now: "Are Oh Ay Dee, Emm Ay Pee !!!"Quote:
We do not expect ATI to launch its R580 (speculated to have 32 pixel pipelines) in C2005 (ATI does not want to stall the channel for the R520), and expect a refresh of the R500 family beginning in spring 2006 with RV560, followed by RV540 and RV505.
Perkam
That's odd, because a few days ago ATI management said the R580 will be out in the Feb/06 timeframe.Quote:
Originally Posted by DilTech
Also they said ATI already has in-house chips designed on the 80nm node, and expects to be first to market with 65nm.
But this is way off topic =)
well then post up what was said or dont post at all with ur rumors ..
why even post if all ur gonna say is .. if what i was told is true then the r580 shouldnt come out .. i guess ill start posting stuff like this :rolleyes:
DilTech could mean 2 things with "big fish":
either Nvidia launches a mainstream card with a 256-bit bus, or they are further along with the G80 than everyone thinks and it's being launched early.
unified architecture :)
Quote:
Originally Posted by Ubermann
Link?
Also, it was ATi's ceo who stated that they cannot release the R580 early, I'd think he'd know better than anyone what they're doing.
Of course, you're right, this *IS* way off topic.
As for jet...be patient, first let me gather the :fact: .
i remember reading that the G80 design was complete and that this was the counter to the r580. Is this what you meant Dil?
http://www.techspot.com/news/18126-n...-approach.html
del.
Well, if the G70 indeed is the NV47 and this new G71 perhaps is the NV48/NV50 or something... then it wouldn't be too odd if they released the real (original) G70 (now G80) some half a year after the original NV47... not saying this has to be the case though.
Now, my current question is this...
Going by NVidia's usual naming scheme, the cards have always followed a similar pattern in core names.
NV10 = Geforce 256
NV15 = Geforce2
NV20 = Geforce3
NV25 = Geforce4Ti
NV30 = GeforceFX5800
NV35 = Geforce5900Ultra
NV40 = Geforce 6800U/NU/GT
NV45 = Geforce6800Ultra(PCI-E)
G70 = 7800GTX/GT/512mbGTX
Now, the high-end parts generally fell on the 0s and 5s; there's a few exceptions like the 5950Ultra (NV38, even though it was the same chip as the 5900). Therefore, why are people all assuming that the G71 and G72 are high-end parts? Because the Inq said so?...The NVx1/NVx2 parts generally were midrange/low-end cards, why would that all change now? For a mental note, the NV31 was the 5600, and the NV41 was the 6800NU PCIe, with the NV42 being a die-shrunk NV41 (to 110nm). They've always been mainstream-ish cards.
Now, this isn't part of the info I've received, it's just something I've since noticed on my own... I'm almost positive these sites have got it all wrong.
DilTech > Because, generally, nVidia's high end fell on the NVxx, not Gxx. Now that the very most fundamental part of the naming scheme has fallen, who knows what to expect? Anything's possible!
NV48 was supposed to be another exception btw. NV47 too. So the changing naming scheme started back in the NV3x days, was supposed to change even more in the NV4x days and now, finally, is completely off in the G7x present!
Uh, I don't know what you're going on about, but you only have to look in actual retail outlets to see the 6800 is alive and well.Quote:
Originally Posted by Tim
http://www.newegg.com/Product/Produc...ubmit=Property
He quoted this:
http://www.theinquirer.net/?article=27440
Well that article was wrong.
TheInq has the tendency to have stuff that Fuad assumes on impulse. :rolleyes: I've said it many times: packet of salt anyone ? :)Quote:
Originally Posted by Cybercat
Perkam
NV48 wasn't meant to be a high-end part. NV48 was the low-high end, much like the 6800NU was to the 6800 Ultra. NV47 was renamed to the G70, as the NV47 was a refresh of the NV40, and NV45 was already being used for the 6800 Ultra PCIe. They're still on the same naming structure they were; just like with the FX5950, they used a non-5 jump because the 5 was already used and the 5950U wasn't worth the NV40 name.Quote:
Originally Posted by slider99
Now going by our own common sense here, and what we've become used to from NVidia, one would assume the G71 is either the 7800GS or the 7600, with the G72 being the other. The 90nm G70(if there is one, at this point NVidia could just say why bother with what I'm being told), would then likely come in as the G75. The G80 being the next new architecture....
This has been how NVidia's naming system has always worked, read back thru their history slider, you'll see this "Gxx" is actually a throwback to their older generations.
Again I say it, I have a sinking feeling that everyone has been misled by the G71/G72 fiasco.
Finally, I still stand by my statement, the 7800GTX 512mb isn't the big fish in this pond.
The NV48 is the 512MB version of the 6800 Ultra. A quick search will tell you the same thing.Quote:
Originally Posted by DilTech
Also, either you're under NDA, or you just like knowing something nobody else does, and absorbing the spotlight from it. For right now, the 7800GTX 512MB is the biggest thing we've got, and if you'd like to dispel that, be my guest. Otherwise keep it to yourself before spreading around useless hype.
He just said it.."in less than 5 months" "The 90nm G70(if there is one, at this point NVidia could just say why bother with what I'm being told), would then likely come in as the G75. The G80 being the next new architecture...."
Something is wrong though, it doesn't make sense....I may have misread something ;-)
so basically another 5-6 months until nvidia's next big card .. just seems like their normal cycle .. every 6 months .. still unsure of what to do
sell my current 7800gtx for a 512 or get another 256 and hold me over until the 90nm one is out :confused:
http://www.theregister.co.uk/2005/10...gforce_7800gs/Quote:
Originally Posted by Cybercat
Are you positive of that statement?
http://www.theinquirer.net/?article=21476Quote:
Originally Posted by DilTech
http://www.ngohq.com/showthread.php?t=3239
Unfortunately many websites can't get straight on that.
The first one is the inq, the second one is dated october 18th, 2005... FARRRR after the 512mb 6800ultra came out. I kind of doubt it'd take so long to surface in a driver, and even more-so doubt it'd need a whole new core to do the exact same thing but add another 256mb of ram.
Anyone got a picture of the gpu on the 6800ultra 512mb?
Also, Allow me to point something out cybercat. Every site that says the 512mb 6800Ultra is the NV48 all cite the inq's story as their source. Why would the 6800ultra need to be the NV48 to handle 512mb while the 512mb 6800gt stayed the same core?..
Agreed. Either back up your statements, or stop posting baseless rumors.Quote:
Originally Posted by Cybercat
EDIT: I'm also under NVIDIA NDA, but you don't see *ME* attention-whoring myself, do you?
October 18th is not the first time it surfaced, they're just pointing it out. Also, they're not referencing the Inq, either. NGO had a news article earlier that said something along the lines of "interesting to see the NV48 hasn't died either". Then they updated it, saying the NV48 is the 6800 Ultra 512MB. Makes me think NVIDIA clued them in on this.Quote:
Originally Posted by DilTech
Getting a picture of the core of the 6800 Ultra won't help you either. They don't say what they are.
Also, the 6800GT 512MB also uses the NV48. It's even built on the same PCB. Funny how you assumed it to be different.
I do believe that the cores are physically called NV48 as well. There's proof of people having NV48 cores in their 6800GT (it's by chance), just do a google search.
Well I have a 6600 core sitting right next to me and it doesn't say "NV43" on it. A quick picture search for the NV40 shows me the same thing.
Eh, if I'm wrong about the 512mb 6800ultra then my mistake. I don't keep up on WAY overpriced cards from either company, especially ones that cost $1000. My mistake on that one. Makes sense though, considering the NV38 was the 5950ultra, and it was the same exact chip as the NV35 with .1v extra.
As for what shadow said about attention whoring...There's no attention whoring...
It's called dropping hints, it's something people have done for years on message boards. Much like how your favorite site, beyond3d, "drops hints" on ATi cards.
BTW, I'm not under NDA, I just figure there's no point in telling everything until I know everything. For now, I'm merely doing what's known as a "service", by telling the people there IS a much bigger fish in this sea.
If I was to say everything I know right this second, I'd just have about 10 pages of people screaming BS, like what happens every time someone posts about having a new unreleased FX or Intel chip. Words mean nothing, therefore I'm merely saying there's a MUCH bigger fish in the sea, and leaving it at that. You can take it how you want to take it, believe me or call me out on it, it doesn't really bother me. However, keep up the attitude about it and I'll honestly just keep everything to myself when that time comes (as in proof NO ONE can argue with). It's irritating to make posts that are meant to be helpful only to have flames occur....
I'm not giving a heads up for attention, I'm giving a heads up just to let you know there's more where this came from.
I appreciate your posts, DilTech. They are informative and I do value them :toast:
-k0nsl