Have 3DMark scores from a card ever been leaked this early?
~2 months? Yes.
Fake scores have always been leaked early...
6.4GHz GDDR5...
A while ago people said there was a memory bandwidth limitation on ATI cards:
http://forum.beyond3d.com/showpost.p...postcount=1434
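For rough context, here is the back-of-the-envelope bandwidth math; a minimal sketch assuming a 256-bit bus (the bus width is an assumption here, not part of the leak):
Code:
# Effective memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# The 256-bit bus is an assumption, not a confirmed spec from the leak.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 6.4))  # 204.8 GB/s for the rumored 6.4Gbps config
print(bandwidth_gbs(256, 4.8))  # 153.6 GB/s for a stock HD 5870 (1200MHz GDDR5)
That would be roughly a third more bandwidth than Cypress on the same bus width, which is why the memory clock alone has people excited.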
Quote:
ATIRadeonX3000AMDCaymanHardware
ATIRadeonX3000AMDBartsHardware
ATIRadeonX3000AMDTurksHardware
ATIRadeonX3000AMDCaicosHardware
Give a link to the original, OK? http://netkas.org/?p=539
Jesus Christ... that's almost double the GDDR5 clock of a 460 :eek:
And almost double the Vantage X score too... :eek:
So maybe that's why Nvidia didn't launch a 490?
Because ATI will be able to beat it with a single-GPU card within a few weeks? :eek:
Looks like there will be a red 460 soon... I wonder how much it'll cost...
And damn... 12k in Vantage X with a single card, at stock!
Just imagine quadfire with those cards... it'll be ridiculously fast... I mean, you could probably play on several 30" displays... in 3D... with everything maxed out... :slobber:
And a Unigine score from the same source.
http://we.pcinlife.com/thread-1500103-1-1.html
http://image155.poco.cn/mypoco/mypho...860315_000.jpg
Well, TBH that's the "problem": there is no real reason to upgrade anymore, as games are not demanding enough. We have enough GPU power now, and it's really the same with CPUs. It's only if you play at 25xx×16xx or on multiple displays that these new cards are worth upgrading to. I don't get why I bought an HD 5870 in the first place; I could play all my lousy RTS games on my 8800 GTS 512 anyway :s
PS: I hate my inner upgrade junkie!
PPS: Selling my computer now and getting a ThinkPad X201t and a ViDock-like solution.
Just found a Unigine 2.1 test with GTX 480/5870.
http://www.geeks3d.com/20100525/quic...-tessellation/
Results:
EVGA GTX 480 Scores
GTX 480 clocks – GPU: 700MHz, memory: 1848MHz and shader: 1401MHz
OpenGL 4.0
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 916
- FPS = 36.4
- Min FPS = 6.1
- Max FPS = 89.4
Direct3D 11
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 970
- FPS = 38.5
- Min FPS = 17.6
- Max FPS = 95.5
GeForce GTX 470 Scores
GTX 470 clocks – GPU: 607MHz, memory: 1674MHz and shader: 1215MHz
OpenGL 4.0
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 719
- FPS = 28.5
- Min FPS = 5.9
- Max FPS = 70.4
Direct3D 11
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 768
- FPS = 30.5
- Min FPS = 13.5
- Max FPS = 76.1
Radeon HD 5870 Scores
- HD 5870 clocks – GPU: 850MHz and memory: 1200MHz
- Drivers: Catalyst 10.4
OpenGL 4.0
Not supported yet.
Direct3D 11
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 531
- FPS = 21.1
- Min FPS = 6.5
- Max FPS = 74.4
It looks like Cayman will kick ass, since it's now on par with the GTX 480 in DX11 but better at everything else, for a lot less TDP.
No AA?
http://www.hexus.net/content/item.php?item=25391&page=6
1,920x1,080 with 4x AA and 16x AF
http://img.hexus.net/v2/graphics_car...X4612/Uni4.png
That's true, it had no AA, but I couldn't find another one that fast :P. What is obvious is that with 4x AA and 16x AF, the future 6870 is faster than any other single GPU in Unigine, which is impressive for a refresh.
From S|A forum:
Quote:
It is a bit bigger than Evergreen, and a hair bigger than GF104. I think it is in the 380mm^2 range, but I could be wrong there.
-Charlie
Guys, we're going in circles here. Could we keep the discussion in one thread instead of posting the same pics etc. in two separate ones? That would make keeping up with new posts a bit easier, IMO :)
What I mean is the overlap between this thread and the "6870 benchmarked" one.
Impressive scores! Much faster than the 480!
Damn, seeing these results I really regret purchasing a 460 1GB. Should have waited just a little longer :mad:
I also think that the benchmark thread and this could be merged.
The HD 6xxx series, especially the HD 6870, looks to be shaping up very well, better than most people's expectations I bet, considering this 40nm refresh had a rather uncertain existence for starters and has been on quite a tight schedule with not much room for improvements. I do believe these benchmarks aren't fake, because every rumor handed out so far (pricing, specs, performance) seems to go rather hand in hand, and by logic it was almost certain ATI/AMD would aim to make the 6870 slightly faster than the GTX 480 and the full 512 SP variant (OK, I personally think it looks a bit more than "slightly" so far) to make life even more difficult for Nvidia (read: make them lose market share). But things don't always go as well as planned, yet HD 6xxx development looks to have gone VERY smoothly (again not surprising, as it's still 40nm, which they have mastered by now, and this isn't a drastic change from the previous series). It would really be interesting to hear an interview with the person in charge of HD 6xxx development.
http://www.xbitlabs.com/news/video/d...s_Sources.html
No more ATI brand for the 6xxx series, according to X-bit labs...
I'm going to ROFLMAO if these benchmarks turn out to be fake :rofl:
They seem too good to be true, and you know what they say about things that seem too good to be true... Fake benchmarks always circulate before a new card is released. You guys should know that by now, so don't put all of your eggs in one basket, eh? :p:
And correct me if I'm mistaken, but memory overclocking never had much of an impact on HD 5870 performance compared to overclocking the core.
So with that said, where does the assumption that the HD 5870 architecture is bandwidth limited come from?
I wouldn't say it was bandwidth limited, but I will say that while the 5870 was a good card, there were 'flaws' in its design that meant it rarely got the most out of its shaders.
The changes needed to fix them did not seem very hard to implement and, in fact, seemed similar to the changes Nvidia made with the 460.
However, we still don't know if these benchmarks are real, so until then, let the speculation continue.
Yes, I'm so depressed that I'm going to put my 480s on eBay right now and start fasting until the 6870 arrives ;)
Thanks, but no thanks. I tried ATI before... never again... unless their product is superior to Nvidia's on all fronts, and that includes CrossFire.
Welcome to the crowd, (fan)boy! :up:
If these are true, then your 480 SLI setup will look INFERIOR compared to the 6870 on all fronts, though not necessarily in performance, no? :shrug: Then again, it's OK to be different. :)
Hopefully Nvidia comes up with something, rather soon...
Not really. It may be slower, but inferior overall? I think not.
A few reasons why I, and many others, have a strong preference for Nvidia:
1) Driver support
2) Extra features, i.e. PhysX, 3D Vision, CUDA, etc.
3) Better filtering.
Number 3 in particular has always infuriated me concerning ATI. Since the 4800 series, GPUs have had so much raw power that there's no need to resort to filtering tricks anymore to increase performance.
Yet ATI still has their "brilinear" filtering optimizations to this day :rolleyes:
They used faster memory than Nvidia, but on a narrower bus.
The cost of adding a wider bus, plus the cost of extra memory chips, seems to offer a worse price/perf ratio than leaving it as is.
If they had increased the memory bandwidth by 30-40%, then costs would probably have gone up by 20-30%, while perf might have only gone up by ~10%, and power consumption by 8-15%.
AMD must really be hurting if they have the balls to keep their prices the same since launch
Exactly, which goes to show that the architecture itself isn't constrained by memory bandwidth. If it were, the performance increase from widening the bus would be much greater than 10%.
The same goes for Fermi. A 512-bit bus would have been useless on the 480 on top of the greater bandwidth already provided by GDDR5.
The only things that would have increased are complexity and power usage.
So, assuming these benchmarks are true, it can't be memory bandwidth that increased the performance, nor the added shaders. It would have to be something else.
I'm no engineer, though, so I'm not even going to speculate :shrug:
If the scores are correct, this is how it may play out:
1. Nvidia launches a full dual-GF104 (384 shaders, not 336) card; let's call it the GTX 495.
2. ATI releases the 6870; in CrossFire, it makes life for the GTX 495 very hard.
3. Nvidia's partners launch overclocked versions of the GTX 495 around the max possible TDP.
4. ATI releases the 6970/6950 and becomes the reigning champ once again for single-PCB cards.
When a new GPU from Nvidia is on the market, the old GPUs don't get any driver updates. Yeah, support is better on the green side...
Nvidia's drivers need improvement too; they need to improve how the control panel is designed, you can get lost in it...
Nice, but what do you do with CUDA? Are you a programmer on CUDA yourself? Or do you use the 2-3 programs that run on it (and on ATI cards too) with some demos? It's nice stuff. PhysX is nothing really good for future graphics, so I don't want it. 3D systems already tried to exist 10-15 years ago and failed. It's going to fail again. A lot of people already wear glasses; they can't wear two pairs at the same time. So it's going to fail again.
Quote:
2) Extra features, i.e. PhysX, 3D Vision, CUDA, etc.
Saying something is not proof. ATI's filtering is far better than on the old RV770. Proof:
Quote:
3) Better filtering.
http://www.hardware.fr/articles/770-...5870-5850.html
And I couldn't find the page on Fermi, but I remember having seen better quality from RV870.
I don't have any problem with filtering. I can even run HL2 EP2 at 16x AA/16x AF; 24x is a bit slow but playable too.
Quote:
Number 3 in particular has always infuriated me concerning ATI. Since the 4800 series, GPUs have had so much raw power that there's no need to resort to filtering tricks anymore to increase performance.
Yet ATI still has their "brilinear" filtering optimizations to this day :rolleyes:
LOL, the card hasn't even arrived, yet the ranting has already begun. :rofl:
I think the GTX 480 will still be SUPERIOR... as a room heater. Remember, the cold days are arriving in the northern hemisphere. :up:
It's OK to be satisfied with a product that you like, but to crap on a competitor's product thread, well, that shows how "mature" that particular person actually is. :down: You CAN doubt these numbers and have every right to, but to bring driver FUD, features FUD, and IQ FUD into the conversation/discussion, doh???? :ROTF:
Don't think it's about people thinking they overlooked it... they thought GDDR5 could only clock so high, and that it wasn't enough... and they thought ATI made a decision to cut costs and go for 256-bit only, even though it wouldn't be enough bandwidth...
I think there's some truth to that; 320-bit would probably have helped RV870, but I don't think it would have done much... and it would have made the cards notably more expensive...
I wonder how much, actually... $25 maybe, if you count it all together? PCB, extra memory chips, packaging, additional transistors...
Probably less than $25, I think?
While Nvidia may have bigger resources for driver support, they seem to break stuff often with new drivers too, so it's not like Nvidia is free from bugs either. For example, ever since the 19x.xx drivers I've had alt-tab issues with some games, especially UT3: clocks get stuck in 2D mode, or you just get very poor performance after an alt-tab or two (like 250 fps dropping to 40-70 fps; not that I play at 250 fps, more like a constant 120 fps), and you have to reboot to get the FPS back. Highly annoying. I'm currently on the 259.32 beta and the issue still persists; I might as well go back to the 182.47 driver soon, because that one worked without issues...
Speaking of drivers, does anyone know if ATI supports 120Hz at non-native res yet in the latest Catalyst? This is such an important thing for me, since I use a 120Hz LCD. I'd gladly pick an ATI card, but since it sounds like 120Hz at non-native res isn't working, I've stuck with Nvidia so far because its 120Hz support is better.
What? :confused:
I don't use CUDA, but many other people do. Fermi is not just a gaming GPU, you know.
Quote:
Nice, but what do you do with CUDA? Are you a programmer on CUDA yourself? Or do you use the 2-3 programs that run on it (and on ATI cards too) with some demos? It's nice stuff. PhysX is nothing really good for future graphics, so I don't want it. 3D systems already tried to exist 10-15 years ago and failed. It's going to fail again. A lot of people already wear glasses; they can't wear two pairs at the same time. So it's going to fail again.
As for PhysX, that's your opinion and you're welcome to it. Personally, I love a good PhysX implementation. It can really change the atmosphere of a game for the better.
Batman: AA and Mafia II are the best examples of really good PhysX implementations.
As for 3D Vision, again, that's your opinion. But you're wrong on one thing.
3D isn't failing. Why do you think there are so many new HDTVs coming out that support 3D?
I recently bought a 58-inch Samsung 3D plasma, in fact, and the 3D material on it was surprisingly good. Nvidia is also coming out with a device that will allow you to play 3D games on 3D-capable HDTVs rather than LCD monitors.
LOL, you must have missed the big debate we had on the forum concerning ATI cheating in Crysis.
Quote:
Saying something is not proof. ATI's filtering is far better than on the old RV770. Proof:
http://www.hardware.fr/articles/770-...5870-5850.html
And I couldn't find the page on Fermi, but I remember having seen better quality from RV870.
Read this thread.
Also, that D3D AF tester doesn't mean squat when it comes to actual image quality in real games.
Click here, and you'll see that ATI's filtering quality in games is actually inferior to Nvidia's.
Guru3D also noticed it when they published their StarCraft II GPU performance article.
Can someone run Unigine Heaven and post a comparison pic? I'd bet a bunch that the Scores number is actually bolded by the benchmark.
Are all the scores from the same source? IINM, some come from the same person who posted the Unigine numbers and some come from someone else.
If they want to fake it, they should at least put some effort into it. It's basically a damn HTML file, IINM. Yeah, I'm just dreaming :p:
It seems that the data is retrieved from an HTML file, no? If so, then it just proves that the bolded score is part of the Unigine layout, and that means nothing.
However... it also means that simply by editing the HTML file, one can produce fake screens, no? :| If so, then *** these results!
Looking at the pic, all the values seem to be bolded, except that the Min FPS and Max FPS values of the ATI run don't seem as bold. But, for example, the 9 in the ATI Score is identical to the 9 in Nvidia's FPS. Same for the 2's compared to Nvidia's Max FPS.
Why do the Min FPS and Max FPS seem less bold? They're clearly bolded for Nvidia...
BTW, it's strange how point of view changes the logical conclusion on a subject. Take these two:
"It must be fake, as the numbers are not the same size in the score and the fps."
"It must be real, as the numbers are not the same size in the score and the fps."
The first suggests it was a poor copy-paste; the second suggests it was not a copy-paste, for the same reasons...
Although it would be logical for the score to be bolded more than the intermediate results, I can't say for sure whether that is the case, so could someone post a pic of a real Unigine run?
EDIT: Yes, it seems that in the Nvidia screen all the numbers are equally bold, so it's likely fake.
Here's my special faked pic. You can take a look at all the 2's all you want, all day.
http://img408.imageshack.us/img408/7316/unigine2.jpg
Conclusion: those scores are faked, unless they purposely edited the HTML to use a different font.
So for the ATI run, the Min and Max FPS numbers have been altered, with 99% certainty. Makes no sense. :confused:
True, the Score and FPS numbers seem to use the same font as unedited ones.
http://img408.imageshack.us/img408/7316/unigine2.jpg
http://i185.photobucket.com/albums/x...860315_000.jpg
Side by side... well, not really accurate, but it's good enough. No visible browser zooming there.
Okay... interestingly, everything in my screenshot is bolded. EDIT: he's using IE. Still, the scores are shopped (difference in bolding).
Or he deliberately wanted us to believe it's fake even if it wasn't. xD He would be quite a noob if he really wanted to make a convincing effort and didn't copy the real font.
One doesn't even have to copy the fonts, or even open any image-manipulation software, to do this. Notepad should do the trick?
Yawn. So the 3DMark scores could be fake as well? Since, without a doubt, these are.
It's pretty annoying. The fonts are perfectly aligned too. Either that's some dedication in photoshopping (checking pixel by pixel), or he simply edited the HTML file, changed the font, and did what you just said. I'm betting on fakes. We know how secretive AMD can be.
Is there any way to check if at least his FPS and score match? Like, does anyone know how to calculate the "score" from the FPS result?
Yeah, true @ the Unigine screenshot. It was edited, obviously. The fonts look the same for avg FPS, score and max/min FPS on a legit screenshot. However, who comes up with the idea of faking this screen in Photoshop/whatever instead of in the HTML file itself? No one would have noticed if it had been done in HTML, lol.
Hehe yeah. I could totally fake one screenshot, RaZz! :P:
Radeon HD 6999 :D
Well, the FPS and Score numbers still seem unaltered... Maybe it was run on an HD 5970, which gives lower min and max FPS, so someone had to up them a bit?
First of all, the score on the "6870" screenshot is correct for the shown "Avg FPS = 36.6". Secondly, it sure as hell was not edited in Photoshop. It really looks like someone messed with the Min and Max FPS in the .html file but forgot to set the text to bold, as it is for "Scores" and "Avg FPS".
Anyway, these are just my thoughts.
More like, someone REMOVED the bold tags from the file. They are there by default. Again, that makes absolutely no sense. To me it seems as if someone ignorant just messed with Photoshop without knowing there's an HTML file.
Why can't there be a checksum to validate the results... :shakes:
The results are valid. AFAIK, Scores = AvgFPS * 25.18. It doesn't take min and max FPS into account. Maybe someone knows better. :confused:
And it would take a hell of a long time to fake this in Photoshop with all those alignments... and not notice the difference in boldness of the text?? No, it's just impossible. He edited the .html and removed the [ b ][ /b ] tags, 100%, by mistake. IMHO :)
ATI run: 922 / 36.6 = 25.1912568
Nvidia run: 743 / 29.5 = 25.1864407
So they seem valid; there's a small rounding error (I'd guess), though.
However... I believe that run wasn't done on an HD 6xxx but on something like an HD 5970, with the Min FPS and Max FPS values modified to make it look like a better performer...
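If that Scores = AvgFPS * 25.18 relation holds (it's inferred purely from these two runs, nothing official from Unigine), checking a leak for internal consistency is trivial to script. A minimal sketch using only the numbers quoted in this thread:
Code:
# Sanity-check leaked Unigine Heaven results: Score / AvgFPS should give
# (roughly) the same constant for every run if the two fields are consistent.
# The relation itself is an assumption inferred from this thread, nothing official.
runs = {
    "leaked '6870'": (922, 36.6),  # (score, avg fps) as posted
    "GTX 480":       (743, 29.5),
}
for name, (score, fps) in runs.items():
    print(f"{name}: score/avgfps = {score / fps:.4f}")
# Both print ~25.19, so Score and Avg FPS at least agree with each other,
# whatever happened to the Min/Max FPS fields.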
It might be fake, it might not, but I tend to hope they're real and stay optimistic, because that would keep the technology bandwagon moving forward and push prices down one way or another. A $229 GTX 460 1GB seems nice, but it would be nicer if there were a strong competitor from ATI (Barts-based cards) that adds choices and perhaps knocks the competitor's price down a notch or two. :)
So... that 68xx card is faster than two 5850s?
Fake, or they did some magic for high tessellation...
So many assumptions; I give up worrying. When are we due for some real info straight from AMD?
They are fakes, or at the very least modified, based on the evidence in this thread and other threads.
Keep prices down? AMD is selling whatever quantities of 5xxx it has. When the 6xxx gets released, there's not going to be a lasting price drop on 5xxx products, because they'll already be out of stock or close to it.
If these scores are true, AMD will jack up the price of their cards another 100 dollars up the line at the very least, to reflect the performance increase. They are already selling out at current prices. And there's nothing wrong with this, but AMD is not the saint of a company people make it out to be, nor is Nvidia the devil.
But it's going to be pretty scary for the consumer: 6870 at $499 MSRP and $600+ street price, 6970 at $750 MSRP and $850+ street price.
This is almost certain to happen, because these cards are so under-produced and supply is so constrained that consumers will pay way more than MSRP; along with the increased pricing, it's going to be the most expensive generation of cards ever.
If this generation from AMD has taught us anything, it's that supply can screw the consumer just as much as any company can.
The scary thing is if Nvidia doesn't have anything to respond with, which they won't unless they have something under wraps (unlikely): the consumer will be overpaying until Nvidia gets a new generation going. The worst part is that a solution might not be in sight until perhaps even later than 28nm, considering how unscalable Fermi seems at this point given its size and performance. And considering Nvidia may be selling the GTX 470/480 at cost, this is really bad for the consumer; having to sell a product below cost does horrible damage to a company, as seen with AMD after the Core 2 Duo generation.
http://i35.tinypic.com/1zg5ij9.jpg
Comes bundled with a copy of Duke Nukem Forever! :p:
Fermi is fine. Almost every chip in the last 10 years has been designed for process scaling. I wouldn't be surprised if they could get a 1024 SP Fermi on 28nm. They may do a silicon spin and get a considerable improvement on 40nm too. I think GTC will reveal what they are planning.
AMD's losses were not only from inferior products but also from buying ATI and the TLB bug fiasco. Nvidia doesn't have those issues.
AMD leaked those screens; the 6870 is even more powerful than that.
Everyone knows that.
Let me chime in with some rumors I've heard from somewhere. Believe me if you want to; if not, don't :)
Cayman will be the 6810/6830/6850/6870 (yes, four SKUs based on Cayman), to be released in November. The 6870 will be around 10-15% faster than the GTX 480, have clocks of 900/1500, and come with 2GB of memory.
Barts will be very similar to Cypress but on a new PCB and with different clocks, maybe with some SPs disabled, and with 1GB of memory.
Turks will be very similar to Juniper, but again on a new PCB and with different clocks (possibly higher).
Hmm.
How well do you trust your sources?
A 900MHz core would be 2.76 TFLOPS with 1536 SPs or 3.46 TFLOPS with 1920 SPs. The latter would be amazing.
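For anyone checking those figures: peak single-precision throughput for AMD's shaders is usually quoted as SPs × 2 FLOPs (one multiply-add per SP per clock) × clock speed. A quick sketch of that arithmetic; both SP counts are just the rumors floating around this thread:
Code:
# Peak single-precision throughput = SPs * 2 FLOPs (multiply-add) * clock.
# Both SP counts are thread rumors, not confirmed specs.
def peak_tflops(sps: int, clock_mhz: float) -> float:
    return sps * 2 * clock_mhz * 1e6 / 1e12

for sps in (1536, 1920):
    print(f"{sps} SPs @ 900MHz -> {peak_tflops(sps, 900):.2f} TFLOPS")
# 1536 SPs @ 900MHz -> 2.76 TFLOPS
# 1920 SPs @ 900MHz -> 3.46 TFLOPS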
If I drink from "teh Nvidia cup," you drink from teh ATI fountain. :D
10-15% isn't all that much. Nvidia can make up that difference with driver optimization, as they still have much milking to do with the Fermi architecture.
They've done it before, many times.
You have the right to doubt these rumours/leaks, take a negative perspective, and be a pessimist about it; no problem, mate. :)
But IMHO, all the bad pricing of the past 4-5 quarters has more factors behind it than just ATI being greedy or wanting to maximize short-term profit. The situation is IMHO quite similar to the Athlon 64 X2 Socket 939 situation of yesteryear: ATI is limited in supply, so even if they wanted to slash prices according to their cost/profit calculations and expand the all-important market share (as AMD needed to against the behemoth Intel back then), they still can't. Especially in the graphics card business, where market share matters a lot given game developers' inclination to optimize for the market leader's architecture, I think ATI would certainly jump at more sales to grab more market share if the situation permitted. ;)
Well, perhaps not like the RV670 and RV770 days, when they were quite desperate to maintain market share, never mind expand it, but the current situation is certainly not in their best interest, especially long term, if they're not actually supply constrained. And then you have to take into account materials price inflation, the higher price and low yields of TSMC's 40nm process, and the ongoing weakening of the US dollar. IMHO, if TSMC's promise of significantly increased capacity in Q3 2010 comes true and they can supply ATI's demand better, we'll see the inflated ATI card prices abate over the next few quarters. :up:
My $.02 on it. Regards. :)
What's to stop AMD from doubling up with SI or NI? If they can get more performance for less space, they will keep getting better and better each generation.
Nvidia needs to get more performance out of the transistors they have and keep the die size the same or lower. The simplest way to do this would be to raise the shader clock, if they can. It was originally rumored that this card was going to have a shader clock between 1600 and 2000MHz. If they can keep the core clock down while increasing the shader clock, this architecture will start to have legs.
The problem with this generation compared to the prior one is that they removed the MUL operation (it was present in the GTX 280), which supposedly would not cause a drop in performance; however, it actually did (or the drivers still haven't reached maturity). If the GTX 295 were clocked like a Fermi card, it would almost certainly be faster. Also, the GTX 480 loses pretty soundly to GTX 285 SLI. Per transistor, Fermi is worse than the GTX 280, which is pretty bad considering it's a new architecture.
The only things I can think of to turn Fermi around at this point are to get the power down, up the shader clock, and get those original TMUs re-enabled. Fermi needs at least 25% more performance to be considered a success and to justify its power consumption.
Oh you guys, I have Unigine, so... see this. If you wanted to remove the bold, you'd have to remove it on purpose, obviously.
Code:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html><head>
<title>Unigine benchmark result</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
<style type="text/css">
body { background: #1a242a; color: #b4c8d2; margin-right: 20px; margin-left: 20px; font-size: 14px; font-family: Arial, sans-serif, sans; }
a { text-decoration: none; }
a:link { color: #b4c8d2; }
a:active { color: #ff9900; }
a:visited { color: #b4c8d2; }
a:hover { color: #ff9900; }
h1 { text-align: center; }
h2 { color: #ffffff; text-align: center; }
.right { text-align: right; }
div.orange { color: #ff9900; }
div.highlight { color: #ffffff; }
div.copyright { margin: 20px; text-align: center; }
table.result { border: 0px; margin-left: auto; margin-right: auto; }
table.result td { border: 0px; padding: 3px; font-size: 200%; }
table.detail { border: 1px solid #b4c8d2; border-collapse: collapse; margin-left: auto; margin-right: auto; }
table.detail td { border: 1px solid #b4c8d2; padding: 3px; }
</style></head><body>
<h1><a href="http://unigine.com/products/unigine/">Unigine</a></h1>
<h2>Heaven Benchmark v2.1</h2>
<table class="result">
<tr><td class="right">FPS:</td><td><div class="orange"><strong>1234567890.</strong></div></td></tr>
<tr><td class="right">Scores:</td><td><div class="orange"><strong>1234567890.</strong></div></td></tr>
<tr><td class="right">Min FPS:</td><td><div class="orange"><strong>1234567890.</strong></div></td></tr>
<tr><td class="right">Max FPS:</td><td><div class="orange"><strong>1234567890.</strong></div></td></tr>
</table>
<h2>Hardware</h2>
<table class="detail">
<tr><td class="right">Binary:</td><td><div class="highlight">Windows 32bit Visual C++ 1500 Release May 21 2010</div></td></tr>
<tr><td class="right">Operating system:</td><td><div class="highlight">Windows 7 (build 7600) 64bit</div></td></tr>
<tr><td class="right">CPU model:</td><td><div class="highlight">AMD Phenom(tm) II X3 720 Processor</div></td></tr>
<tr><td class="right">CPU flags:</td><td><div class="highlight">3400MHz MMX+ 3DNow!+ SSE SSE2 SSE3 SSE4A HTT</div></td></tr>
<tr><td class="right">GPU model:</td><td><div class="highlight">ATI Radeon HD 4800 Series 8.762.0.0 1024Mb</div></td></tr>
</table>
<h2>Settings</h2>
<table class="detail">
<tr><td class="right">Render:</td><td><div class="highlight">direct3d10</div></td></tr>
<tr><td class="right">Mode:</td><td><div class="highlight">1920x1080 4xAA fullscreen</div></td></tr>
<tr><td class="right">Shaders:</td><td><div class="highlight">high</div></td></tr>
<tr><td class="right">Textures:</td><td><div class="highlight">high</div></td></tr>
<tr><td class="right">Filter:</td><td><div class="highlight">trilinear</div></td></tr>
<tr><td class="right">Anisotropy:</td><td><div class="highlight">16x</div></td></tr>
<tr><td class="right">Occlusion:</td><td><div class="highlight">enabled</div></td></tr>
<tr><td class="right">Refraction:</td><td><div class="highlight">enabled</div></td></tr>
<tr><td class="right">Volumetric:</td><td><div class="highlight">enabled</div></td></tr>
<tr><td class="right">Replication:</td><td>disabled</td></tr>
<tr><td class="right">Tessellation:</td><td>disabled</td></tr>
</table>
<div class="copyright"><a href="http://unigine.com/">Unigine Corp.</a> © 2005-2010</div>
</body></html>
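Which fits the theory above: every result value in the real output sits inside <strong> tags, so the "less bold" Min/Max FPS in the leaked shot is exactly what deleting those tags in a text editor would produce. A purely illustrative sketch (the filenames are made up; Notepad would do the same job):
Code:
# Illustration only: deleting the <strong> tags is all it takes to get
# un-bolded numbers in a Unigine result page. A hand edit that only touched
# the Min/Max FPS rows would match the leaked screenshot.
with open("unigine_result.html", encoding="utf-8") as f:  # hypothetical file
    html = f.read()

html = html.replace("<strong>", "").replace("</strong>", "")

with open("unigine_result_edited.html", "w", encoding="utf-8") as f:
    f.write(html)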
This is only relevant if you take Fermi as a gaming GPU... which it isn't.
Fermi was designed for both HPC and gaming. The HPC market is much more profitable than the high-end gamer market (and from the benchmarks I've seen, Fermi decimates ATI in that area), so ATI leading in perf per mm^2 isn't as important as you'd believe.
WG_Baby claimed that he never said those screenshots were legitimate.
Quote:
Remember, Cayman is a different architecture as well. If Fermi can be tweaked for more performance (and it already received a perf boost), Cayman can be as well.
Well, 15% is still an approximation from them (I'm thinking conservative). It might as well be 20%; they are still trying to nail down the clock speed. The initial target, as far as I know, was 950MHz, but it will likely end up at 900. Also, I frankly don't remember if I was told 512 SP or 480 SP Fermi; gotta check...
Also, the four different SKUs are all faster than a reduced Cypress (Barts). There should be at least a 40% difference between that and the top-end Cayman. That leaves three to fit in the middle somewhere, I guess.
One thing is for sure though: the reduced Cayman versions look to be able to clock like madmen. They are likely to have some SPs disabled, but they certainly look to be great bang-for-the-buck cards.
Finally, as many places have already noted, it will be AMD Radeon :)
:rofl:
Yet that's what they are selling 90%+ of Fermi chips as... but nOoOoOo, it's not a gaming chip... pff... that would be silly... :lol:
Sounds like the standard sore-loser excuse we hear so much these days: "if you lose, claim you didn't really try to win to begin with."
According to nApoleon @ Chiphell, the Vantage P and Unigine scores are legitimate.
http://www.chiphell.com/thread-119587-1-1.html
Quote:
A certain card (platform unknown):
3DMark Vantage P24499
Unigine Heaven (1920×1200, 4xAA + 16xAF): 36.6
On the same platform, a GTX 480 gets 29.5 in Unigine Heaven (1920×1600, 4xAA + 16xAF).
(nApoleon, posted 2010-8-30 11:38)
But as I said, the HPC market is far more profitable than the gamer market, so increasing the HPC capability of your GPU leads to more profit than focusing on gaming only.
Just think: how much does a high-end Quadro GPU cost? Thousands of dollars... for ONE!
Increasing the HPC and scientific capability of their GPUs, while retaining their ability to be used as primary gaming GPUs, is one of the best moves Nvidia has ever come up with.
Because of Fermi, Nvidia has strengthened its hold on the HPC market, which, like I said, is inherently much more profitable than the gaming market... even though fewer GPUs are sold.
They are for gaming. I should have been more specific, but I thought I clarified that in my following sentence:
Quote:
Funny you bought 2 GPUs that aren't even for gaming, lol
Fermi was designed for BOTH gaming and HPC, and as such it's pretty amazing, since they are the top performers in both areas... at this time.
Quote:
Fermi was designed for both HPC and gaming
There is an increasing need for HPC, which AMD will most likely also address in the upcoming 6000 series, and further down the line with new generations as the market expands.
Fusion makes a lot of sense for OEMs: no extra video card or extra features on the motherboard means everything can be built cheaper, which is just what an OEM likes.
I see AMD in a better position than Nvidia once they get things together.
AMD still lacks a good PR and communications department, but they haven't hired me yet.
;)
:ROTF:
Any long-term Nvidia user will tell you that it's nothing for Nvidia to squeeze about 15% extra performance (on average) or more out of their GPUs between the introductory drivers and the fully optimized drivers, which may take about a year or more to arrive.
In Fermi's case though, it may be sooner (or later), because the GPU is radically different from previous generations.
The 2xx.xx drivers have already given significant performance increases after only a few months.
I never said Fermi wasn't a gaming chip :rolleyes:
Quote:
Yet that's what they are selling 90%+ of Fermi chips as... but nOoOoOo, it's not a gaming chip... pff... that would be silly... :lol:
You just selectively read one sentence of mine instead of looking at the entire post. I said that Fermi was designed for BOTH gaming and HPC.
Do you honestly believe that Fermi is a failure? If so, then you are truly deluded... :down:
Quote:
Sounds like the standard sore-loser excuse we hear so much these days: "if you lose, claim you didn't really try to win to begin with."
If AMD genuinely wants to address the HPC market, they will have to invest more transistors in their designs for dedicated HPC functions, which will mirror what Nvidia has done with Fermi.
Of course, the AMD fanboys will herald this as the greatest thing ever, even if the power envelope increases dramatically, much as happened with Fermi :rolleyes: