Not sure how real or accurate these are, but we get the first leaked benchmarks.
http://www.hexus.net/content/item.php?item=23032
http://img.hexus.net/v2/news/nvidia/...nchmarks/1.png
these have been circulating for weeks now.
SKYMTL has been very positive about Fermi over at HWC; he certainly is not setting a bad tone for it. I really doubt he could be so positive knowing it is a fail... and Charlie is hardly credible: he can make all sorts of claims and hope he is right about some, but the majority of the time he is wrong.
Unless my timezone causes me to become blind (4 AM), I can't see why the marked out parts are.... marked out. :shrug:
Top Part:
Benchmark: PCGH save "From Hell's Heart" / Enthusiast details
Res./FSAA/AF: 1920x1200 / 4x MSAA (in-game) / no AF (driver)
Date Published: 26 March 2010
Bottom Part:
Core i7 @ 3.5GHz (175x20; SMT/TM off), Intel X58, 3x 2,048MB DDR3-1400 (7-7-7-21)
Maybe it's just fake? I just can't figure out why those spots are marked out....
Great blurring, I for one totally didn't see the 1920x1200, 4xAA, no AF settings :p:
Crysis performance doesn't look too good, unfortunately. With a slightly overclocked 5870 refresh, ATI would easily be able to surpass it in Crysis.
They really should close down the website and come back as "rarelyifeveraccurate.com"
Likely they, along with a few other websites, are just upset they'll never get review cards, given all the BS they spew. Not trying to be an Nvidia fanboy, just reading their comments and putting them into perspective.
Oh and actually I didn't even need to "try" to read it.... it's all the same as previously, apart from 16:1 AF to No AF :P.
http://e.imagehost.org/0779/Ati_Rade...head_3_PNG.png
5890 can and thus will happen.
Virtually all 5870s (850MHz) can max out Overdrive at 900. Most cards do around 920-935MHz @ 1.15V. It doesn't take a genius to figure out that better binning or 1.2V could get 950MHz, but to stay within 225W, probably binning.
Memory on the 5870 (1200MHz) overclocks to 1300. I doubt AMD will bother trying to source some rare uber-high-end GDDR5, since most of the performance improvement comes from core overclocking.
Therefore, a "5870 OC" is probably gonna be something like 950/1250 with 2GB (with Overdrive headroom up to 1000). Despite the +12% clock, performance is probably only +5%... you know, diminishing returns.
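The diminishing-returns point can be sketched numerically. A minimal Amdahl-style estimate, assuming only part of the frame time scales with core clock while the rest is memory- or CPU-bound; the 0.45 core-bound fraction below is an invented number for illustration, not a measured one:

```python
# If only a fraction of frame time is core-clock-bound, a +12% core clock
# yields much less than +12% fps. The 0.45 fraction is assumed, not measured.
def fps_gain(clock_gain, core_bound_fraction=0.45):
    # Speed up only the core-bound portion of frame time (Amdahl-style)
    new_time = (1 - core_bound_fraction) + core_bound_fraction / (1 + clock_gain)
    return 1.0 / new_time - 1

print(f"{fps_gain(0.12):.1%}")  # 5.1% with the assumed fraction
```

With that assumed split, a +12% clock buys roughly the +5% the post guesses at.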
===========
If I recall correctly, the 5870 is 4-phase and the 5850 is 3-phase. For the 4890 (5-phase?), AMD made a better PCB than the 4870's... maybe they'll reuse that idea.
I'm wondering though (after looking at all the other reviews: the Hemlock launch, Cypress launch, 5850 launch, 4890, etc.): this is the only one where it says No AF (driver) instead of 16:1 AF (driver), as if they could not force AF with Fermi?...
Did not see this posted; if it was, my apologies. It talks about the launch for Fermi and says they will have 2x more cards than the 5800 series launch did.
http://www.brightsideofnews.com/News...id=1240&page=0
And this... a GTX 480 sample with 512 SPs and a possible 600MHz core (1200MHz shader).
Be aware that the 5870 is set as the reference card, so those are not FPS numbers in the graph below, but differences in %.
Tested games, 2560*1600:
Call of Duty World at War, 4xAA, 16xAF
Company of Heroes, Opposing Fronts
S.T.A.L.K.E.R. Clear Sky, NoAA, NoAF
Call of Juarez: Bound in Blood, Maxed
Enemy Territory: Quake Wars, 8xAA, 16xAF
Far Cry 2, DX10, HDR On, 4xAA
Crysis Warhead DX10
Dawn of War 2, Ultra
Fallout 3, Ultra, High-Res Texture Pack
HAWX, DX10, SSAO Very High, 4xAA
Resident Evil 5, DX10, Maxed
Wolfenstein, 8xAA, 8xAF, Maxed
Batman: Arkham Asylum, NoAA, Maxed
The scores were a little higher for the ES card with the second set of drivers, but nothing record-breaking.
Core temp idle was 58C, core temp load around 86-90C, which was good.
It's loud, but not 5800 Ultra loud.
In the tessellation tests, the GTX won by +53% vs the 5870, but this is about as relevant as the 5-point parallelism test, where the Radeon wins by 350%+.
Idle power is at least 3 times that of the 5870, and power draw at load was 283W. The official 480 will draw less power because it will have a whole shader cluster disabled.
http://img709.imageshack.us/img709/9015/gtx480.jpg
Source: http://forums.amd.com/forum/messagev...VIEWTMP=Linear
In that graph the 5870 beats the 480 in Warhead, but from other benchies we know the 480 is actually faster. So the 700MHz clock has definitely sped everything up, and we can expect somewhat higher scores than the ones there, still not record-breaking.
Also, this is obviously where Charlie was getting his estimates from. Looks like the increase from 600MHz to 700MHz will prove him wrong, though not by much.
By pure math...
Losing 32 shaders would cost about 6.25% of the shading power...
but you'd gain about 17% clock rate on the cores alone... and a good chunk more memory bandwidth.
You'd also be speeding up the ROPs, whereas more cores alone don't do that...
Additionally, the TMUs would be sped up, as those are tied to the core clock and not the shader clock, as far as I'm aware...
He also said the newer drivers added a couple % performance...
Also worth noting, this guy claims his engineering sample card had 512 shaders @ 600MHz core, 1200MHz shader, and 2800MHz (GDDR5) RAM; full specs should be 480 shaders @ 700MHz core, 1400MHz shader, and 3600-3700MHz (GDDR5) RAM.
So my guess is that with those numbers considered, it would be maybe 16-17% better performance for 700MHz @ 480 cores vs. 600MHz @ 512 cores, taking the memory frequency into account as well. Couple that with a couple of extra percent from the newer drivers (he said a few, call it 3%) and we might see a graph looking a bit more favorable. Still, if this guy's accurate, it wouldn't be enough to make it what I would call a clean win for nV here.
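For what it's worth, the 16-17% figure checks out as a weighted average of the component speedups. A minimal sketch, with the 50/30/20 split being an invented weighting between shader-, core- and memory-bound work rather than anything measured:

```python
# Back-of-envelope: retail GTX 480 (480 SPs @ 700 MHz core / 1400 MHz shader,
# 3696 MHz effective GDDR5) vs. the 512 SP @ 600/1200, 2800 MHz ES from the leak.
es     = {"shader": 512 * 1200, "core": 600, "mem": 2800}
retail = {"shader": 480 * 1400, "core": 700, "mem": 3696}

ratios  = {k: retail[k] / es[k] for k in es}   # shader +9.4%, core +16.7%, mem +32.0%
weights = {"shader": 0.5, "core": 0.3, "mem": 0.2}  # assumed split, illustration only

est = sum(weights[k] * ratios[k] for k in ratios)
print(f"weighted estimate: +{(est - 1) * 100:.1f}%")  # +16.1%, close to the 16-17% guess
```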
I wouldn't say it reflects market share more than anything else. I'd say market share might be a small factor though. Regardless of how much, the end result is still that nv's drivers were inferior to ATi's, so the anti-ATi driver crusade has no merit. :p:
When the earthquake hit Chile, you were the first one I thought of (i'm not familiar with anyone else from Chile). I hope things are OK down there for you and yours.
One of the posts was already deleted, which a user comments on in the thread... but he clearly stated the clocks I mentioned before as what his ES came with. The one with that quote is the top post on page 4 of the linked thread.
Quote:
So the final GTX480 will have 480 shaders (one main unit disabled) and clocks upped to 675, so it'll have pretty much the performance of the 512 shader, 600MHz ES I tested, that being 3.6% faster on average than the 5870.
So judging by the price, power & heat - overall it's not worth the trouble.
I agree with your estimations of a 5870 OC / 5890, however where the hell did you pull this 920-935 @ 1.15V number from? :confused: I've used 4 different 5870s (one of which is my own) and not a single one could claim such a feat (I doubt they'd do 850 at 1.15V, for that matter). AMD has even upped the stock voltage from 1.168-1.172 on the reference BIOS to improve stability.
If anything they needed 1.2V (mine needs 1.237) best case to get 925. Now this isn't to say that with improvements it couldn't be done, but it just doesn't seem indicative of reference launch cards (the cards I've used were all revision 1 with the initial BIOS, all tested in 2009). Sure, a 4-card sample isn't huge, however they were all from different vendors and all consistent in how much voltage they seemed to play nice with (or not). Anyway, just a petty thing to kick dust up over; I just found this statement to be a tad up in the air is all.
I'd love to see a 5890 with at least 950/1300 clocks coupled with 2GB of memory. That should offer a 5-15% performance bump over a 5870. Going by the leaks so far, this would be more than enough to challenge the 480. Given those supposed 512sp leaks were at 2560x1600, in some cases with 8x AA, I'm shocked the 5870 doesn't fare worse with its VRAM handicap. Should bode well for a future ATI refresh.
Is GeForce Fermi asking ppl to invest their money to support Nvidia's cloud server future?
I believe BZ and I said it has the POTENTIAL to achieve this. Whether or not it will accomplish its goals is yet to be seen. :up:
What we BZ and I have said again and again is that people should WAIT before jumping too far onto a bandwagon claiming anything extreme.
As for Charlie's Dirt 2 story, not many reviewers are benchmarking with the demo version of the game and with the retail version you can physically force DX11 through the config file.
No potential at all. I only see an inefficient, premature design intended as a scalable cloud server product, disguised as the desktop GeForce Fermi.
The GTX480 has 3 billion transistors doing what? 2.812 billion are enabled and supposedly working.
Yet the originally designed capabilities are disabled on the GeForce Fermi:
* NVIDIA OptiX engine for real-time ray tracing
* NVIDIA SceniX engine for managing 3D data and scenes
* NVIDIA CompleX engine for scaling performance across multiple GPUs
* NVIDIA PhysX 64-bit engine for real-time, hyper-realistic physical and environmental effects
What's left? The power consumption.
http://img525.imageshack.us/img525/5...3img0197co.jpg
http://img11.imageshack.us/img11/180...6img0219co.jpg
http://img94.imageshack.us/img94/782...1img0207co.jpg
http://img222.imageshack.us/img222/3...5img0199co.jpg
http://img203.imageshack.us/img203/8...0img0204co.jpg
http://www.atomicmpc.com.au/Gallery/...-guide.aspx/15
Nice pics! Finally a clear one of the IHS and PCB.
Damn, from the last leaked benchmarks I think Fermi is disappointing :(
thank you for the enlightening post. btw, power consumption for the 480sp part should be around 250 watts.
does anyone know what this connector is for?
http://i71.photobucket.com/albums/i1.../gtx290pcb.jpg
Believe me or not, but that's the truth, and W1zzard is no noob either ;p
The HD 4870 was really hot too, especially the VRMs. Just have a look:
http://www.hardware.fr/articles/751-...n-hd-4870.html
Back to the subject: it seems that first-generation Fermi isn't a great deal.
I hope ATI prices go down though; I don't want to pay 250€ for an HD 5850, more like 150-200€. Competition FTW :D
Nvidia Greenlight logo, if I remember correctly.
Nice thread. These are my thoughts and speculations:
ATI:
The 5000 series 40nm process is now refined enough to produce a 1GHz chip that, coupled with 2GB of RAM, will stay within the PCIe electrical spec and deliver a devastating 1-2 punch to Nvidia. It would be a formidable marketing coup for ATI to be first to market with a 1GHz chip *AND* overtake the 480.
ATI will do it, simply for the market-dominant position and the bragging rights of having the fastest single GPU. No matter the manufacturing cost of that hypothetical card, they will be able to ask $500+ for it anyway and push Nvidia's prices down when Nvidia already has no margin left on these GF100 cards. If in the next couple of weeks it's technically feasible for ATI to jump at Nvidia's throat for the kill, why wouldn't they do it?
Nvidia:
Out of the chipset market, the bump-gate scandal, the wood-screw puppy, multiple rebadged cards, the miserable execution of the 6-month-late Fermi, their general attitude problem, and the fact that they have pretty much alienated everyone in the industry all confirm one thing: Nvidia is way too full of itself and needs an ego/reality check, fast.
The gaming side of the Fermi equation is an afterthought at best. It's a GPGPU with a bolted-on gaming chunk. I really hope for their sake that the double-precision floating point power, ECC RAM and CUDA SDK with C++ capability will be enough to save this monster of a chip. Putting 2 GF100 chips on a single card is out of the question; the clocks are already at their maximum. Short of introducing a 512sp version very quickly or relying solely on driver optimization, there is no way up for them.
What you'll see on the 26th is pretty much what you'll get for the next year or so. Nvidia won't be a force to be reckoned with in the gaming market until the Fermi die shrinks to 32nm. Problem is, they will have to face the HD 6000 way before that. Nvidia won't be able to absorb blow after blow like that for many more quarters. There's a limit to how long you can keep surfing on the wave of your dominant market position without sinking.
Conclusion:
It's all sad because, based on the benchmarks already available, it's clear where we are going. I would have liked Nvidia to be not only competitive but dominant, in order to keep prices low. I think they need to restructure and rebuild their bridges and corporate image. The first thing Nvidia needs to do is fire Jen-Hsun Huang before it's too late.
Ramon
Yeap, my HD4850 went up to 110C on the shaders and 116C on the MemIO while running the OCCT stress test for a few minutes, and it's still running today with no problems at all.
Is that part of the metal heatsink showing on the surface? If so, adding a 120mm fan will get some nice temps, with no need to run the stock fan at high speed.
This looks about right, I guess... Not impressive, but decent enough.
Inefficient, yes. But it has potential. There could be quite a few interesting desktop applications, heavy tessellation environments being one of them.
There is no real-time ray tracing.
The other stuff you mentioned is pure marketing...
I never said he was a noob.
And do not confuse VRMs with GPU chips.
VRM has a different structure and can operate at higher temperatures. Most GPU VRMs are rated for 120C. But the GPU chips are definitely not.
I try to not be wrong... I don't assume things often.
Also, your logic ≠ my logic. So no, the same logic isn't being used.
Is this good enough for you?
http://cdn.i.haymarket.net.au/Utils/...pg&h=450&w=665
http://cdn.i.haymarket.net.au/Utils/...pg&h=450&w=665
Samsung GDDR product sheet
K4G10325FE HC04
K - Samsung
4 - DRAM
G - GDDR5 SGRAM
10 - 1G 8k/32ms
32 - x32
5 - 8Banks
F - 7th Gen
E - 6th Gen
H - GDDR 170FBGA
C - Commercial Normal
04 - .4ns (5Gbps)
So, the same memory ICs as the 5870. If the MCs aren't limiting frequency, ~250GB/s is possible with a memory clock of 5.2GHz effective.
Maybe they purchased the high-binned GDDR5 before they knew they messed up the MC (per Charlie), or they couldn't use the full clocks due to a power/TDP wall,
and decided to leave it on there to allow users/AIBs to overclock.
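A quick sanity check on that figure, assuming GF100's 384-bit bus from the spec leaks in this thread: bandwidth is just effective memory clock times bus width, so the 0.4ns/5Gbps parts pushed to 5.2GHz effective land just under 250GB/s, while the leaked 3696MHz stock clock gives the familiar 177.408GB/s.

```python
def gddr5_bandwidth_gbs(effective_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: Gbps per pin * pins / 8 bits per byte."""
    return effective_clock_ghz * bus_width_bits / 8

print(round(gddr5_bandwidth_gbs(5.2, 384), 3))    # 249.6   (IC rating, if the MC could take it)
print(round(gddr5_bandwidth_gbs(3.696, 384), 3))  # 177.408 (leaked GTX 480 stock clock)
```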
LOL.... No, it's not "Charlie"!
I'll admit, I *HATE* corporate arrogance. From that perspective, Nvidia is very easy to hate, and it shows in my "editorial" position. Another example: followed Apple's business practices lately? I just can't stand it. In Apple's case, on top of the ego-maniacal, self-appointed emperor Steve Jobs's tyranny, you get overpriced if somewhat palatable products. I'm not saying ATI/AMD is all white, but it's hard to deny that they're just not in the same league when it comes to business practices and corporate attitude.
But this thread is about the new and upcoming Fermi, isn't it? Then let's put my logic to the test with arguments that disprove my theory:
FACT: Out of the chipset market
FACT: Bump-gate scandal
FACT: Wood-screw puppy
FACT: Multiple rebadged cards
FACT: Miserable execution of the 6 month late Fermi
FACT: General attitude problem
FACT: Alienated everyone in the industry
FACT: Putting 2 GF100 chip on a single card is technically impossible at 40nm
FACT: At 700MHz, the clocks are already at their maximum because of heat.
FACT: Driver optimization and a 512sp version are the only ways to gain speed on Fermi. What you see on the 26th, except for a ~14% bump (7% sp, 7% drivers), is what you'll get until 32nm.
FACT: Unless a VERY dramatic turn of events happens, ATI will launch the HD 6000 series in Q2 2010, WAY before Nvidia can shrink its Fermi to 32nm.
Do you deny any of those previous statements? If so, please explain.
My opinion: Fermi's gaming capability is an afterthought.
My opinion: Fermi is first and foremost a GPGPU with a bolted-on gaming section.
My opinion: DPFP, ECC RAM, C++ and a superior SDK are the real differentiators that will make or kill Fermi in the GPGPU market.
My opinion: If Fermi fails to sell significantly as a GPGPU, and to a lesser extent as a gaming card, that can only be bad news for Nvidia and all of us.
My opinion: Nvidia can't afford to miss the boat like that too many times in a row without seriously impacting its long-term viability.
I would like to hear your opinion on those specific points too. I'm fully aware that no official benchmarks are available and announcing Fermi's death right now would be premature. But you have to admit, based on what we already know, it's gonna be FAR from the 5870 killer it was supposed to be. In fact, it's clearly going to be an attainable target for a 5870 refresh.
Let's put it this way: a cheaper 5870 that consumes 62W less under load (and even much less at idle), runs cooler in your case, makes less fan noise, and runs your games at 85-90% of the speed of the 480; OR a 5890 that runs the same or faster, with more RAM than the 480, for the same price as a 480. Which one would you choose: the 5870, the 5890 or the 480? I would take the 5870 and overclock the hell out of it if and when I need it. That is, unless Fermi ends up 40% faster than the 5870 instead of just 10-15% *AND* they fire Jen-Hsun Huang. Then I'll buy TWO 480s and put them in SLI. LOL.
Ramon
My 5890!
:D:D:D
http://www.abload.de/thumb/graka2252u.png
:D:D:D
You must even choose the good ones man!
The heatsink design looks very nice... it looks like you can remove the heatsink and leave the shroud, mem sink, VRM sink and everything else on, so going H2O or mounting a bigger heatsink should be really easy and fast...
In that regard Nvidia did a really good job...
Hello all.
I'm not sure, but I am afraid that with this cooling the 5870 could run at a 1000MHz stock clock. For only $50 more? Is this ATI's idea of competition?
Out of interest, how long until we see something to go up against the 5770?
I don't see Charlie backpedaling, but I see a lot of backpedaling from a lot of people in this thread, including the out-of-the-thread-for-some-time-now type of backpedaling.
Fermi is even worse than I thought, and that's why I stopped making fun of Nvidia; in the end I always thought a decent card would come up, so it was OK to make fun of the announcements of the announcements.
All AMD needs to do is a 2GB 5870/50 with a couple more MHz, and the 5970 can remain at $700.
Fermi = Nvidia K10
Msi gtx480
Source: http://product.pcpop.com/000267783/Index.html
Quote:
MSI N480GTX-M2D15 main parameters: GeForce GTX 480 core, GDDR5 memory, 700MHz core frequency, 1536MB memory, 3696MHz memory frequency, 384-bit memory interface, PCI Express x16 2.0. Basic parameters (desktop-class graphics card):
Graphics core: GeForce GTX 480
Core codename: GF100
Core manufacturer: NVIDIA
Process: 40nm
RAMDAC frequency: 400MHz
Core frequency: 700MHz
Memory bandwidth: 177.408GB/s
Memory frequency: 3696MHz
Memory type: GDDR5
Memory package: FBGA
Memory capacity: 1536MB
Memory interface: 384-bit
Stream processors: 480
Stream processor frequency: 1401MHz
Interface type: PCI Express x16 2.0
Maximum resolution: 2560x1600
Other: HDCP; cooling via fan + heatsink
Display interfaces: dual DVI + Mini HDMI
3D API: DirectX 11
Multi-card technology: SLI
http://www.hexus.net/content/item.php?item=23079
Quote:
Sapphire HD 5870 TOXIC 2GB Review
Clock speeds: 925MHz/5,000MHz
http://img.hexus.net/v2/graphics_car...58702GB/03.png
We take a look at the Sapphire Radeon HD 5870 TOXIC 2GB just before GTX 480 launches. Let's see how much it beats up on all current competition and conjecture whether it stands a good chance against you-know-what....
The results are not a mistake. A 2GB frame-buffer enables the Radeon HD 5870 GPU to breathe when really taxed. Indeed, it's faster than a Radeon HD 5970 - a card that has to share its frame-buffer between the GPUs.
A near-30 per cent performance increase over a generic HD 5870 is down to more than just clock-speed, clearly.
Yes, and 2GB 5870 / 4GB 5970 OC editions will probably be the best cards for 2560x1600.
http://futuremark.yougamers.com/foru...70&postcount=1
Quote:
Thanks to a "certain" friend of mine who's under the NDA, I've had a quick look at the nVidia FTP with pics, "reviewer's guides", drivers and, most importantly, a whole plethora of GTX470/480 benchmarks against the HD5850 and HD5870, and from what I've seen, the GTX480 beats the HD5870 easily by 25% on average. Sometimes it's 10%, sometimes 50-60%, but the GTX480 always wins.
They used the 10.2 ATi cats for their comparison by the way.
So yeah, don't read too hard into certain people's fear mongering anti-nVidia BS.
"reviewer's guides":rofl::rofl::rofl:It doesnt make any sense like those PR PPTs...
Chiphell's nApoleon has reviewed the GTX480. He expected it would have at least 20% more performance than the HD5870, but its performance disappointed him.
May be , but there is a high chance he will be wrong about that too , let's just wait and see .
Thanks for the welcome. Yes, I am from B3D... I've been waiting to post here since October '09; the activation of my account was put on hold, I don't know why. Not that I regret it anyway, you guys have done a wonderful job of messing with 3 threads! Disastrous!!
Quote:
welcome btw, are you from B3D? I remember seeing you on some other forum
Anyhow , this picture :
http://i42.tinypic.com/9vgvt4.jpg
It is fake; the site uses conventional names for its drivers (like Cat 10.1), not version numbers.
Here is the confirmation that it is fake :
http://translate.google.com/translat...tcount%3D18157Quote:
This is a fake. Our benchmarks are different.
http://img33.imageshack.us/img33/3743/55605712.jpg
There is someone, or some group, that is trying to mislead the fans with fake, lowballed results for Fermi; they are exploiting the strong veil of secrecy invoked by Nvidia.
Anyhow , let's wait and see how it turns out !
Your link is down Graham
My initial predictions were correct if this is verified, as it completely overshadows the 5870 and licks the 5970's feet. I sincerely doubt this is the case, but if true I'll sell my 5970 in a blink :p:
Quote:
Thanks to a "certain" friend of mine who's under the NDA, I've had a quick look at the nVidia FTP with pics, "reviewer's guides", drivers and, most importantly, a whole plethora of GTX470/480 benchmarks against the HD5850 and HD5870, and from what I've seen, the GTX480 beats the HD5870 easily by 25% on average. Sometimes it's 10%, sometimes 50-60%, but the GTX480 always wins.
sorry back online :
http://www.forum-3dcenter.org/vbulle...ostcount=18157
OK,let's wait and see how it turns out:clap:
I would like to see how all that hype becomes a joke.
Omg, they got it like 4 days ago... and still no benchmarks? What are they doing with the GTX480? xD
To clear things up: Raffael works for the German hardware site PCGH (PC Games Hardware) and stated that these results are fake. The test system they used differs from the system on the screen, as do the drivers and the selection of cards benched. The fps numbers in their test will show only integer values.
Hope I could help. Greetings from Germany ;)
Reviewer's guides try to fool reviewers by highlighting good points and ignoring bad points... if you know how to handle that, then they are accurate, yes... if you think it's a good roundup of the pros and cons of a product, then no, they are very misleading... :)
That doesn't mean the numbers are fake though... if a site posts results they get into trouble, but if they post a picture of "leaked" results from another site... that's not breaking any NDAs ;)
I think the numbers are fake though; wasn't the 470 supposed to be slightly faster than the 5870 in Crysis Warhead?
That is the sexiest looking reference heatsink I've ever seen :hump:
I am talking about the benchmark numbers that accompany ATI's and NVIDIA's reviewer's guide. You are confusing the reviewer's guide with the PR deck.
Remember, the PR slides usually try to highlight only the positive points while the reviewer's guide usually acts as a factual document listing specifications, talking about core layout and using pure benchmark numbers (no 100%, etc crap) without PR nonsense.
I think the Fermi heatsink is shaping up to be the #1 best-designed reference HS, but second place also belongs to Nvidia; methinks this one is #2 hands down:
http://www.3dnews.ru/documents/12278...-front-big.jpg
http://www.3dnews.ru/documents/12278...cooler-big.jpg
http://images.bit-tech.net/content_i.../heatpipe2.jpg
http://www.xbitlabs.com/images/video...x_angle_sm.jpg
ATI cards have always had uglier coolers since the X1900 series, but the custom models rock; the IceQ and Vapor-X series have the best coolers imho.
Anybody seen this yet?
http://www.tomshardware.com/news/fer...tage,9965.html
ASUS GTX480 50% faster with overvolting!
Overvolting? Seriously? I thought we were working with that safety bear to try to reduce household electrical fires.