...Could have been an S3 ViRGE against a Matrox Mystique inside a Cray Jaguar for all we know.
I only read the Hexus piece in the link I sent, so I know nothing of the machine's innards being exposed. Besides, the dxdiag screen could have been faked, and if NVIDIA had a hypnotist on the payroll, who is to say what was seen?

Not true at all. The side panel of the machine was open, allowing the people in attendance to see everything inside, from the memory to the GPU to the mobo to the PSU. Finally, NVIDIA offered to open up the dxdiag screen. How do you think their use of a 960 was confirmed?
Is there a tongue-in-cheek smiley? Because the winking smiley failed in my previous post.
Fun Box: Asus P8Z68-V GEN3++Corsair AX850++i5 2500k@4.5Ghz-1.272v++Corsair A50++2x8Gb Corsair Vengeance++MSI R7970 Lightning++Audigy2 Plat-EX++TBS 6280 DVB-T2 tuner++256Gb OCZ Vertex 4++500Gb Caviar Black++500Gb Seagate Barracuda++Sony AD7240s++Lian-Li PC-60++Linux Mint/Win 7++Asus P238Q
Work Box: Gigabyte H61MA-DV3++Corsair HX620++i5 3450@stock++2x8Gb Corsair Vengeance++120Gb OCZ Agility 3++Linux Mint
Quantum theory in a nutshell: It's so small we don't know where it is, it could be here, it could be there.
Just 'cos it's legal don't make it right.
Good read.
I feel like watching all the conspiracy theory movies and a few X-Files episodes with some of you here.
What if nVIDIA used a Radeon HD 5870 and can't produce a working Fermi?
What if the world as we know it ends tomorrow?
What if...
Good lord...
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
come on, spill it, pretty please
Maybe what I heard about those slides is true....
Those slides could be real; I heard they were shown to the AIB partners, etc... who knows....
What if BenchZowner was a bit more fun and entertaining? Speculation is F--U---NN
Last edited by ajaidev; 01-20-2010 at 12:21 PM.
^ lol
lol well my guesstimate is $400-500
though still keep expectations low/leave room for surprise
spill what.. my glass is empty
all the fermi vids/all the talk about those vids.. thats "old" fermi
if all nvidia wanted to do was beat the 5870 thats easy.. peanuts! fermi would be out in retail already
- theres room for clocks: thats whats taking longer
- theres room for drivers: a lot of room for improvements
- theres room for oc: just as 5870/5970 can oc ~20%~ so does fermi
- theres room for other stuff too
some of you have forgotten.. some of you dont know.. all some of you know about nvidia is the renaming.. about who the hell nvidia is.. they dont like to lose!
Last edited by NapalmV5; 01-20-2010 at 12:54 PM.
If what you say is true, then that is a really good thing. I love competition, and I actually want Fermi to be up there with the 5970, because I work as a 3d artist and for gpu rendering purposes I need a Fermi as my new upgrade.
So, let's all hold hands and sing praises so that Jen and his magic green goblin team can release a great product.
Right guyz, WE need graphics competition and, above all, we need kick*ss games, and hope DX11 can bring a real difference vs consoles.
I agree
As soon as a >1GB VRAM card which pushes more pixels than a GTX 285 is released I am going to buy one.
I am currently livid with BFG, see here as to why. It's no surprise BFG have pulled out of Europe.
I just hope eVGA or some other decent brand produces high quality Fermi cards.
John
Stop looking at the walls, look out the window
^ try supreme commander/2 players/lots of units/triple buffering: 1.7gb easy @ 285 2gb and still playable while 1gb 285 doesnt/cannot sustain such brutality
1gb cards ati+nvidia all theyre good for is benches.. at least the way i see it
Yes we need competition !
The best scenario could be something like this:
nVidia releases a single chip GTX 380 which barely matches the performance of HD5970. That's all nVidia needs to do to ask a high price.
ATi releases a refresh (if not a new HD5980?) right after, and then gets just ahead.
In this scenario we would be able to get a great GTX 380 for a good price; otherwise I'm afraid a superior nVidia single GPU (aka a repeat of the 8800GTX) won't come cheap.
► ASUS P8P67 Deluxe (BIOS 1305)
► 2600K @4.5GHz 1.27v , 1 hour Prime
► Silver Arrow , push/pull
► 2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
► GTX560 GB OC @910/2400 0.987v
► Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
► CM Storm Scout + Corsair HX 1000W
+
► EVGA SR-2 , A50
► 2 x Xeon X5650 @3.86GHz(203x19) 1.20v
► Megahalem + Silver Arrow , push/pull
► 3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
► XFX GTX 295 @650/1200/1402
► Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
► SilverStone Fortress FT01 + Corsair AX 1200W
Let's play what if since you guys are so fond of these mind games
@Sam oslo...
and what if nVIDIA releases a dual GF100 in April?
It is not about "what if". It is about a very possible competition scenario that has happened before, and there is a good chance of it happening again, sooner or later.
In this scenario, that dual-GF100 would become even more interesting to follow. Don't you think so?
EDIT: 2 scenarios could explain the necessity for releasing a dual-GF100. Either the GTX 380 is behind the HD5970 by a good margin, or it will fall behind a refresh (or a new GPU which ATi is going to release soon). What else could be the reason for a dual-GF100, you think?
Last edited by Sam_oslo; 01-20-2010 at 04:43 PM.
i knew id bring the positive out of you
fermi vs 5970
512 vs 3200
384bit vs 2x256bit
512 fermi shaders beat 3200 radeon shaders.. just as the actual 384bit mem bandwidth beat 2x256bit actual mem bandwidth and thats even if they stick to 4.2gbps
so much for radeon shaders hyped as "more efficient"
mind games.. oh that 400GB/s (5gbps) fermi ??
Last edited by NapalmV5; 01-20-2010 at 07:51 PM.
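For anyone wanting to sanity-check the bandwidth figures being thrown around here, peak GDDR5 bandwidth is just bus width times effective data rate. A quick sketch; the data rates below are the rumored/speculated figures from this thread, not confirmed specs:

```python
def gddr_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bits per transfer / 8 bits-per-byte) * effective Gbps per pin."""
    return bus_width_bits / 8 * data_rate_gbps

# Rumored GF100: 384-bit bus
fermi_low  = gddr_bandwidth_gbs(384, 4.2)   # ~201.6 GB/s at 4.2 Gbps
fermi_high = gddr_bandwidth_gbs(384, 5.0)   # 240.0 GB/s at 5.0 Gbps
# HD 5970: two GPUs, each behind its own 256-bit bus at 4.0 Gbps
hd5970 = 2 * gddr_bandwidth_gbs(256, 4.0)   # 256.0 GB/s combined
print(fermi_low, fermi_high, hd5970)
```

Whatever the final clocks turn out to be, the formula makes it easy to check any quoted number against the bus width.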
i wouldnt go that far. ATi has hands down the best shader units. the problem is there are no games that can use this, because this is 3d graphics, not shading.
im referring to actual 3d performance.. thats just numbers
what if they hold it together with wood screws? haha jkjk
looking forward to seeing what fermi can do!
and so ATI 5000 series will come down in price :p
FX-8350(1249PGT) @ 4.7ghz 1.452v, Swiftech H220x
Asus Crosshair Formula 5 Am3+ bios v1703
G.skill Trident X (2x4gb) ~1200mhz @ 10-12-12-31-46-2T @ 1.66v
MSI 7950 TwinFrozr *1100/1500* Cat.14.9
OCZ ZX 850w psu
Lian-Li Lancool K62
Samsung 830 128g
2 x 1TB Samsung SpinpointF3, 2T Samsung
Win7 Home 64bit
My Rig
You do realize that the "3200" shaders are just how ATI counts them (marketing speak), right? You've got to divide that by 5 to get the equivalent count for Nvidia... so 640 to 512, and that's not counting the fact that Nvidia has a hot clock, meaning the shaders on the Nvidia part are equivalently worth more per unit.
As for your bandwidth talk, that's hilarious seeing as how you were once championing 512-bit as a necessity, when 256-bit + GDDR5 did just fine when it came to the RV770 vs. GTX 285.
Slow down on the kool-aid thar
I don't see Fermi selling for $400-500 at all unless its performance really is under the 5970's by a decent margin. No way has Nvidia EVER sold a top-end card without a premium. See: GTX 280 vs. 260 prices at launch.
And I'm pretty sure people have gotten in trouble for claiming to know stuff on this forum without substantiating it.
Last edited by zerazax; 01-20-2010 at 08:36 PM.
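The unit-count normalization above can be made concrete. ATI counts every lane of its 5-wide VLIW units as a "shader", while NVIDIA counts scalar cores running on a faster hot clock, so raw counts aren't comparable; peak FLOPS (units × 2 FMA ops × clock) is a fairer yardstick. A rough sketch; the GF100 hot clock here is an assumed/rumored figure, and peak numbers say nothing about real-game efficiency:

```python
def peak_gflops(shader_units, shader_clock_ghz, ops_per_clock=2):
    """Peak single-precision GFLOPS: units * ops per clock (FMA counts as 2) * clock in GHz."""
    return shader_units * ops_per_clock * shader_clock_ghz

# HD 5970: 3200 counted lanes (= 640 VLIW5 units), shaders run at the 725 MHz core clock
hd5970 = peak_gflops(3200, 0.725)    # ~4640 GFLOPS across two GPUs
# Rumored GF100: 512 CUDA cores; the ~1.4 GHz hot clock is an assumption, not a spec
gf100 = peak_gflops(512, 1.4)        # ~1433.6 GFLOPS
print(hd5970, gf100)
```

By this yardstick the headline "3200 vs 512" shrinks to roughly 3:1 in peak throughput, which is why per-unit efficiency and clocks matter far more than the raw counts.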
hmmm, and i thought you had some inside info; seems you just based your claims on your flawed logic. the shader comparison is funny though: if you divide ati's shaders by 5, that means ati's 160 lower-clocked shaders matched nvidia's highly clocked 240 shaders, and unless fermi's shaders are some sort of miracle it's really hard to believe that only 512 shaders can match 3200. fermi has a few undeniable advantages though; higher memory bandwidth and being single-gpu are important ones.
How about this scenario: Fermi sits in between the 5870 and 5970 in current games (but we are still talking about > 60 fps), BUT it is much quicker than the 5870, beats the 5970 in the Unigine benchmark (geometry), and is the quickest in DX11 games?
Now that would make a lot of people think.