That's what I thought. So how can the 3rd graphics card improve the 3dmark06 score?
rd790 mobos in september :eek:
i can't wait for rumours to be put to rest one way or the other.
Yep. I'm not going to agree that they used 3 GPUs, because the Phenom rig photographed at GC 07 had 2.
Anyway, ATi developed a way to spread the workload across the 3 GPUs: each card renders different parts of the image, then the data is transferred through the PCIe slot or CrossFire connector (can't remember which, but it's one of those) to the master card (the top card, not the old master card with the stupid dongle), where the images are put together and sent to your monitor.
Because it can increase the performance of an already fast dual-GPU CrossFire setup. Folks are also using these cards as PPUs for physics work.
http://www.bit-tech.net/news/2007/07...ed_ppu_cards/1
But the other rig did have 3. I didn't see this one and I'm not sure what the setup was. Yet that's what the problem is here to start with. In 11 days we'll know. Some will say AMD was smart no matter what happens.
If it's fast, we'll see:
"See, AMD was smart to have held this back, it rocks."
If it's slow, we'll see:
"AMD was smart to have hidden this performance from the market."
Whatever K10 does, though, it will be great that the hiding and crap will be over at last. Maybe folks will be nicer?
Hey! Who's thread crapping by rating this thread bad? We've actually got a serious, good discussion about Phenom going on :D
Wow, it's looking better by the day :D
I just hope they don't have too many yield problems with that huuge die and that they can get it to scale past 3GHz.
it's true, as i was stationed just at the next bench table doing our stuff :D
http://www.tweaktown.com/news/8068/w...ghz/index.html
that's kayl with the chicks
i did the benching of course hhahahahah
heh, i just noticed inquirer disabled the e-mail page. i'm sure someone will share his email addy so that we can spam it to death if this turns out to be bull :p:
I have a better question. How the hell is 'Trifire' even supposed to work with cat 7.7? It seems odd that such an old driver would support the new Crossfire, no? Also, correct me if I'm wrong, but Trifire has only been showcased on 3GHz systems, right? IIRC this system was a 2.5GHz system that got OCed to 3GHz.
Time for some analysis:
From HKEPC:
http://www.hkepc.com/bbs/news.php?ti...me=0&endtime=0
-Dual Crossfire gives 1.8x speed of single 2900XT
-Tri-Fire gives 2.6x speed of single 2900XT
1.8x/2.6x=69.2%
69.2% of 30k=20769
A more reasonable score.
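That estimate is simple enough to sanity-check in a couple of lines. Keep in mind the 1.8x/2.6x factors are HKEPC's numbers and the 30k score is the Inq's claim, so the output is a rough guess, not a measurement:

```python
# Rough back-out of a dual-CrossFire score from the rumoured Tri-Fire run.
# Scaling factors are HKEPC's figures; 30000 is the Inquirer's claimed score.
dual_scaling = 1.8    # dual 2900XT CrossFire vs. a single 2900XT
tri_scaling = 2.6     # Tri-Fire vs. a single 2900XT
claimed_score = 30000

ratio = dual_scaling / tri_scaling
dual_estimate = claimed_score * ratio

print(f"dual/tri ratio: {ratio:.1%}")                  # ~69.2%
print(f"implied dual-CF score: {dual_estimate:.0f}")   # ~20769
```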
BTW, one of AMD's presentations shows IPC improvements of 15% over K8, so I doubt it was done with dual x-fire.
http://www.ptcinnovationforum.ru/documents/AMD.pdf
English version: http://www.sunmicrosystems.se/virtua...f_Nordlund.pdf
Could it crack 30,000 with an instruction set like, say, SSE5? :) :) :)
Don't know, I expect it to be 12-14 cycles. The L3 is slow because you need arbitration, and that eats plenty of cycles for 4 cores.
The solution is to make it large (slow L3s are typically 16-64MB) but the die is already huge.
You can imagine the L3 thrashing when all 4 cores chew along different datasets...
That's very debatable; large & slow vs small & fast is a never-ending debate.
You need to take into account that the L1 is also only 2-way associative, which kinda sucks.
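As an aside, how much a 38-cycle L3 actually hurts depends on how often you reach it. A toy average-memory-access-time (AMAT) model makes that concrete. All hit rates and the L1/L2/memory latencies below are made-up illustrative numbers; only the 14 vs 38-cycle L3 figures come from this discussion:

```python
def amat(l1_hit, l2_hit, l3_hit, l1_lat, l2_lat, l3_lat, mem_lat):
    """Average memory access time (cycles) for a 3-level cache hierarchy."""
    return l1_lat + (1 - l1_hit) * (
        l2_lat + (1 - l2_hit) * (l3_lat + (1 - l3_hit) * mem_lat)
    )

# Hypothetical hit rates; 3-cycle L1, 12-cycle L2, 200-cycle DRAM assumed.
fast = amat(0.95, 0.80, 0.60, 3, 12, 14, 200)  # 14-cycle L3
slow = amat(0.95, 0.80, 0.60, 3, 12, 38, 200)  # 38-cycle L3
print(f"AMAT, 14-cycle L3: {fast:.2f} cycles")
print(f"AMAT, 38-cycle L3: {slow:.2f} cycles")
```

With these made-up hit rates the difference is tiny; the L3 latency only starts to bite as the L2 miss rate climbs, e.g. when 4 cores thrash the shared L3 with different datasets.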
The problem here is die size; even if they wanted to, they couldn't put more cache on the 65nm version. At 283mm^2 it looks to be beyond what AMD's fab can muster, at least for now. Yields are probably very poor; there is a lot of device variation, which prevents them from shipping stable higher-clocked parts.
Quote:
Also, Shanghai (the 45nm successor for Barcelona) still has the same L2 cache size, while the L3 cache is expanded to 6MB.
Because of this, it seems that the AMD engineers didn't think the size of the L2 cache in K10 was going to limit its performance; otherwise they would have addressed it in Shanghai.
On 45nm there are extreme problems awaiting AMD; you can check for yourself here:
http://www.semiconductor.net/article/CA6464480.html
This reply explains it in great detail:
Quote:
Submitted by: Sang U. Kim, Ph.D (kimsang@wbhsi.com)
8/1/2007 2:29:13 PM PT
Location: Phoenix, Arizona
Occupation: retired, but active technically
One of the major differences between the SOI and bulk technology for 45nm and beyond is how the electrostatics, or the short channel effects, are controlled. For the bulk technology used by Intel, the quantum confinement of carriers is controlled by a combination of halo source/drain implants and retrograded channel/substrate doping. On the other hand, for the SOI technology, the quantum confinement of carriers in the inversion layer is achieved by physically reducing the SOI thickness, Tsoi, by narrowing the space between the gate oxide and the buried oxide. To mitigate the short channel effects, 45nm SOI may require 50nm~40nm Tsoi, 32nm may require 30nm~20nm Tsoi, and 22nm technology 10nm or less Tsoi. Such a thin Tsoi causes significant carrier mobility degradation and an increase in threshold voltage, Vt. Furthermore, for the scaled devices, the strain-induced mobility enhancement techniques become less effective. This is particularly so for thin SOI technology, simply because at such thin (~10nm) junction and isolation depths, and channel inversion layer thickness, it is extremely difficult to implement GeSi S/D junctions and a large lattice mismatch induced by the relaxed Ge-Si substrate in the channel.
Even for the 45nm SOI technology, the manufacturability of the strain-induced mobility enhancement techniques used for 90nm and 65nm may not be feasible. In this respect, the SOI technology for 45nm and beyond has a significant disadvantage over the bulk technology. IBM and AMD are at the crossroads today to determine extendability as well as manufacturability of the SOI technology for 45nm and beyond. The conversion from SOI to the bulk 45nm technology node has enormous technological and manufacturability challenges. This is because IBM and AMD do not have the required learning experiences such as process, design, reliability and device yield gained from 90nm and 65nm bulk technology development and manufacturing. Furthermore, two major new materials were introduced in the bulk 45nm technology: the thermal oxide, SiO2, that was used for 40 years is replaced by HfO2, and the polysilicon gate that was used for over 30 years is replaced by the metal gate.
Today Intel is the only company that is manufacturing the bulk 45nm. If that is true, Intel has enormous advantages over its competitors, particularly if IBM and AMD have to adopt the 45nm bulk technology. This is because Intel must have resolved most of the device, process, reliability, and manufacturability issues that result from the introduction of the new materials and processes. When new materials and processes like HfO2, the metal gate, and their new processes are introduced, new or unknown failure mechanisms will also be introduced. Therefore, it is crucial to design test structures so as to bring out the unknown failure mechanisms for early detection, and to develop effective E-test and reliability test screens. Such experiences gained through the 90nm and 65nm bulk technology development cycles will give an edge to Intel in the successful development of the 45nm technology and beyond.
I think what everyone forgets to factor in is that with much higher PCIe bandwidth and more CPU power you can utilize the GPU better. If we take kinc's score and add 1000 for SM2 and SM3 (ok... maybe a little too much), we only need 11100 in CPU score.
SM2: 11581
SM3: 12798
CPU: 11100
= 30032
nice triple post shintai, you know the edit button?
or lower resolution. ;)
[QUOTE=savantu;2400130]Intel and AMD know; according to Intel the difference is negligible. AMD OTOH claims native is better, but doesn't say by how much.
I expect the truth to be in between.
[/QUOTE]
Well, just look at the other benchmarks thread, that'll explain it all. Native is better for HPC solutions, which the K8 and K10 are designed for. Desktop users won't see much difference because there isn't much mainstream software that is multi-threaded.
Shared cache is not the definition of native multi-core. It's a design choice.
Quote:
Let's look at the current X2s and Conroe. We can call the K8 X2 the best MCM-like approach: 2 cores which are connected by an outside link.
C2D uses a shared L2, the most "native" approach.
Shared cache is good for single-threaded software or when you're running benchmarks. In real-world situations where you use multiple programs at the same time, better known as multitasking, you'll find dedicated L2 caches more efficient.
You can see this in Quake 4 benchmarks. Conroe's performance takes a bigger hit than the X2's when you enable SMP. That's because the shared cache suffers from "cache conflicts". Dedicated cache will not have that problem. It's just a design choice and does not mean one is more native than the other.
Trust me on this, the K8 does not only care about latency; the K8 simply cannot handle bandwidth higher than a given maximum. Right now DDR2 has gone over that line. Look it up in AMD's documentation if you don't trust me.
Quote:
Time and time again K8 showed it was in no way bottlenecked by its memory system. It only cares about low latency.
I don't think it's small; the IMC will take care of that.
Quote:
K10's major flaw is its cache size; 512KB is small and the L3 will be a bottleneck. Also its latency, if true (38 cycles), is horrible compared with K8/C2D/Penryn (12-14). Latest reports put Penryn's massive L2 at 13-cycle latency, which is nothing short of astounding.
Well, just take a look at the other benchmark thread, you'll see most of my statements confirmed. In Cinebench you can clearly see that professional software can make a huge difference. A native quad core does look to scale better than an MCM solution, agreed?
Quote:
Umh, not exactly. The changes made will offer sizable benefits for existing code.
http://www.hardocp.com/news.html?new...VzaWFzdCwsLDE=
So he thinks a 3GHz K10 would do ~22.5K.
Quote:
As for what the noted system would score in this popular benchmark, we would have to inform you that the score Theo saw was about 30% to 35% inflated. And while I wish it was going to work out to be true for the gamers and hardware enthusiasts out there, it is just not going to happen. You can take that to the bank. Phenom FX is going to be a great product that will be extremely competitive with current Core 2 architecture, but let's not inflate our expectations artificially.
[QUOTE=TigeriS;2402324] K8/K10 scaling is and will be better than Core 2/Penryn because there is no FSB and cache coherency traffic is handled by HT links. However, raw performance is outstanding for Core 2, so the scaling degradation doesn't matter so much. Yes, when both cores access the shared L2 the bandwidth to each core drops, but the large L2 ensures a good hit rate, meaning slow RAM accesses are limited. Running dual SuperPi or F@H SMP, Core 2 is much faster than K8, and SMP F@H datasets run near 768MB at max. Fast RAM is also pretty much useless on Core 2 unless you overclock the FSB. Then it really starts to fly.
Looking forward to seeing K10 in action with its 32-byte fetch and improved BPU/OOO and floating point units, even if it does clock at only 2-2.3GHz. Intel hasn't exactly been quickly ramping clock speeds on Core 2 Duo recently.
If anybody saw the thing on [H]ardOCP on the thread they have for this, don't pay attention. I'm becoming more and more suspicious that Kyle and all of [H] are being paid off by Intel and NVIDIA.
How can he say something like that with no official confirmation that it is true or false? Rather than claim it's all false, he could have taken the neutral standpoint and explained it could be true because... and it could be false because...
Quote:
There has been a lot of talk in our forums about the Inq posting a story noting 30K+ 3DMark 06 scores with a Phenom FX / Agena core AMD next-gen processor. Let me just call a spade a spade here and tell you that this story is wrong. A 3GHz “K10” and 2900 XT CF system as noted below is NOT going to give you a 30K score in 3DMark06.
The particular processor was none other than a single socket Barcelona or Agena FX, call it what you will. The reference motherboard containing RD790 chipset packed two HD 2900XT cards, and the memory installed was Corsair’s Dominator PC2-9136C5D, or the same ones we have been using ever since they came out. There was a Raptor hard drive, and that was about that. OCZ will like the fact that PP&C Quaddie CrossFire PSU was installed in the system.
When clocked at 3.0GHz and equipped with two overclocked HD2900XT cards in CrossFire, Agena FX or single-socket Barcelona smashed an index of 30,000 in 3DMark06.
I know Theo Valich, the author of this story quoted above. While he might have very well seen what is being reported, it is not going to be reproducible in a controlled test environment. I have to think that someone was pulling the wool over his eyes. As for motivations, who knows?
As for what the noted system would score in this popular benchmark, we would have to inform you that the score Theo saw was about 30% to 35% inflated. And while I wish it was going to work out to be true for the gamers and hardware enthusiasts out there, it is just not going to happen. You can take that to the bank. Phenom FX is going to be a great product that will be extremely competitive with current Core 2 architecture, but let's not inflate our expectations artificially.
I think you're wrong, and I must disagree with you this time.
Kyle Bennett is under NDA and he has already seen several Barcelona benchmarks (search for the Barcelona demo in San Francisco).
He actually said a few times that Barcelona\Phenom is going to end up faster than Core 2 and that he can't say much more than that.
So it would make much more sense to take his word for it over the highly unlikely Inq report, because there are some things he knows which give him a much better idea/estimate of Phenom's strength.
It's also strange, because people have always accused him of being an AMD fan, not the other way around.
But in any case, we should know soon.
The only "semi-official" info we have for now is TechARP's quote of 20-30% better on average than Intel's stuff, and Gary Key's (AnandTech) statement that the K10 results on coolaler's forum are bogus. That's all we know from people who are under NDA but said something other than "it's BS" or "it's not BS".
Kyle was NOT in SF...
In fact he posted BS about Teraflop in a Box even though other techies who were actually in SF and had a Q/A with AMD were stating different numbers.
Kyle & Co. had decided a few months before the HD2900XT was released that it was a flop, so they decided to report it that way. I believe they are STILL using an early sample for their benchmarking.
There are too many reasons to list, but Kyle & HardCrap do seem to be WAY too biased towards Nvidia right now. I haven't seen too much pro-Intel from them, but that is because I can't handle their BS anymore.
You can of course come to your own conclusions, but guessing that they are in Nvidia's pocket isn't too far-fetched...
Ummm.. Really?
http://img168.imageshack.us/img168/9932/1kylelc9.jpg
http://hardforum.com/showthread.php?...ight=party+amd
In fact not only was he there, he was also the first one to advertise the event and send out invitations to the other [H] forum members.
Original Post by Kyle
I didn't say anything about ATI/Nvidia at all, so I don't see why you brought it up. Besides, I didn't say anything about the man myself, just stating what I saw in other places.
Quote:
In fact he posted BS about Teraflop in a Box even though other techies who were actually in SF and had a Q/A with AMD were stating different numbers.
Kyle & Co. had decided a few months before the HD2900XT was released that it was a flop, so they decided to report it that way. I believe they are STILL using an early sample for their benchmarking.
There are too many reasons to list, but Kyle & HardCrap do seem to be WAY too biased towards Nvidia right now. I haven't seen too much pro-Intel from them, but that is because I can't handle their BS anymore.
You can of course come to your own conclusions, but guessing that they are in Nvidia's pocket isn't too far-fetched
Later...
After Teraflop in a Box was showed off...
He was in SF for a launch party hosted by AMD which obviously didn't do much since he was still posting BS in the review.
Don't you find it slightly suspicious that the HD2900XT review rev1 had a few Nvidia slides in it? One of which was the power consumption slide, which of course was just slightly biased...
Don't understand why you are telling me stuff I already know...
Did you look at my link? That was 2 months before the launch event.
Come on now...
Edit- You had said this-
Which is false, since Kyle was not in SF for the Barcelona demo; that was in March.
Quote:
Originally Posted by Face
He eventually did see a Barcelona demo in May but that was at a launch party, not what you were talking about originally.
I have no idea what you are talking about.
The "AMD is throwing a party in SF!" was the event I was referring to all the time. Where did I mention another event? As I said, I have no idea what you are talking about, or about another SF event which I did not mention. :shrug:
And I still can't see why you keep bringing the HD2900XT subject back. Here's what I said: not a word about the HD2900XT, his reviews, or my personal view of whether he's biased one way or another. "People have always accused him...", not me, and by saying "not me", I do not mean to say he's not biased, or that he's neutral, or biased for X, Y or Z. I simply don't care enough to find out, OK? I was merely trying to let people know what others think of him based upon what I saw.
Quote:
He actually said a few times that Barcelona\Phenom is going to end up faster than Core 2 and that he can't say much more than that.
It's also strange, because people have always accused him of being an AMD fan, not the other way around.
Please see my first paragraph. I was only referring to one event, and never talked about other demos.
Quote:
Which is false since Kyle was not in SF for the Barcelona demo, that was in March.
He eventually did see a Barcelona demo in May but that was at a launch party, not what you were talking about originally.
Hope you understand,
Oh... so you didn't say this-
This-
Nor this-
:confused:
1)You told him he was wrong.
2)You talked about a Barcelona Demo in SanFran, that happened in March not May...
To clarify, again-
March = Barcelona/R600 demo (Teraflop in a Box)
May = R600 launch party to try and bring HCrap back to good terms. AMD decided to bring Barcelona along in an R600 rig; that wasn't the reason for the event.
3) You then made another comment about how strange it is to believe he is biased against AMD, implying he was wrong.
I'm sorry if it seems like I am flaming you, I am just trying to clear up some misunderstandings.
Yes, this has nothing much to do with you talking about Barcelona and more to do with you implying Kyle & Co. have no bias.
I'm sure Kyle has seen some interesting demos and is under an NDA, but because of recent events I cannot take his word at face value anymore, or at all really...
I'll try to keep it short, and this will be my final comment on this issue :
1. I said I disagree with him, based on what I've seen others say. This does not imply I think Kyle is not biased.
2. I talked about whatever demo it was in SF that Kyle Bennett attended. I did so from the beginning.
3. I'll repeat myself once again, although it's getting redundant: I said 'strange' because I saw many people say the opposite. This does not imply I think he's not biased.
You just don't get it..?
Quote:
you implying Kyle & Co. have no bias.
OK, I'll say it clear and loud, once again:
I really, truly, honestly don't give a dirty, filthy hogwash drop of care, If Mr. Kyle Bennet is biased towards A, or biased towards B, or is unbiased to ABC.
I have zero interest to defend Mr. Bennet or to "imply" he's biased, not biased, or whatever, towards _anybody_ .
I don't care. Really. Honest.
This is none of my interest. Please understand that it was really not my intention to discuss Bennett's objectivity.
If you want to argue about this, please don't do so with me, as I simply don't care/don't know/don't care.
Thank you for keeping the thread civil :), and know that we really have nothing to argue about.
Quote:
I'm sorry if it seems like I am flaming you, I am just trying to clear up some misunderstandings.
Now, please, let's stop this OT right now, and get back and discuss what's the thread really is about.
Thank you,
if that 3DMark benchmark did happen, do you think Theo is blind and
confused 30k with 20k displayed on the screen? Either 3 cards were used,
or the K10+RD790 is that good. The former is by far more plausible, though.
Why are people here still guessing about the 30k mark when we can work out a 23k mark on a stock Phenom 2.5GHz with stock CF? That number is much more realistic and easier to compare...
:clap: Kyle's this.....Kyle's that.....event this.....BS that....what a complete and utter waste of time this thread has become. Little girls bleating about how much they know about ppl connected to Barcelona etc etc.
You guys seriously need to get a grip of reality. I mean some of you are just starting to take things a little too far when it comes to a benchmark thread.
And to think, it's all because AMD's K10 smashed Intel's little 3DMark '06 record :rofl: :ROTF: :clap:
Intel users........:welcome: to a REAL quad-core environment :up:
^^^ :lol:
ok im thinking this thread tops out at 30 pages before we get any real info.
place your bets. :)
Are you cold? Trifire!
Post #473 is one of the funniest things I've seen in a while! :ROTF: :ROTF: :ROTF: :up:
Some of what K10 will be up against!
http://www.theinquirer.net/default.aspx?article=41868
Not just for show, since this baby features two FSBs.
Quote:
The board has FOUR PCI-E x16 GPU slots and, of course, strong power/VRM config with overclockable Dual FSB config on top of the unlocked multipliers. All the chipset and VRM heat sinks are easy to replace for optional water block/freeze block cooling.
Knowing the 4+ GHz air-cooling overclock headroom on these 45nm parts, and even higher liquid or cryo cooling potential, coupled with fast FSB & memory - I thought "fast FB-DIMM" was an oxymoron - we could be in for some record results when the monster E-ATX board shows up in the market in the next few months.
After all, an extra 20-30% speed boost on the CPU, and an additional 30%+ memory throughput due to optimised FSB latency coupled with both higher bandwidth and lower latency from those upcoming FB-DIMMs, should give a very decent application performance increase, not to mention the ability to feed, say, four Nvidia G92 GPUs simultaneously.
When is that board due to hit already?;)
QFT
but it isn't amd fanboys only, just take a look at Donnie
WOOT THIS WILL BE UP AGAINST 4 X16 SLOTS WOOOOOOOOO INTEL IS THE PWNAGE, MY PEN** GOT 2 TIMES AS LARGER AFTER I READ THIS NEWS ABOUT INTELS 1337 PLATFORM WOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO AND IT GOT 2 FSBs COMPARED TO THE CRAPPY HT LINKS AMD GOT ZOMG
just lock both barcie threads, they're full of garbage :up:
EDIT: i like the 2-thread system: the AMD fanboys are in their own world in here and the Intel ones go to the thread about the results on the coolaler page, and both are untrustworthy...
Then you can't read, and have a vivid imagination as well; that's a bad combination ;) Maybe you'd be more comfortable at AMDZone, where you Green Team AMD fanboys can live in peace in your own little alternate reality. Please note: you don't see me posting in the AMD section of the forum. If I bother you, please go there and have a peaceful night of blind bliss :) Looks like you're the only one desperately seeking to get this thread closed. What, did the V8 give you heartburn?
Yep, FB-Dimms and poor scaling is certainly going to give someone a LOT of heartburn.
I thought this was a K10 thread...
And whoever said something about all the AMD "fanpois" might want to actually read the thread. Just as many Intel spammers are here as AMD fans.
Yep, post 473 is one of the best posts ever.
Batti Banter, made a memorable one here;
http://img190.echo.cx/img190/5120/banana7pg.jpg
http://www.xtremesystems.org/forums/...ad.php?t=61953
^ :eek: where the banana hand go to?
No censorship is allowed, LOL! Just like posting that I said something I didn't :) The comparison was made between Intel and AMD.
Still wondering where the subject came from? Again, learn what spamming is before you misuse the word again ;) I'd not waste my time posting in AMD portions of the forum where folks have their heads buried in the sand hiding from reality. Again, an AMD thread doesn't mean lie or spin for AMD. What's wrong with you guys?
Quote:
Quote:
Originally Posted by Piotrsama
Conditions of the benchmarking are not clear.
Quote:
Originally Posted by BrowncoatGR
just annoying how much crap gets posted in all threads over Intel and AMD; 50% are flame only (or posts that lead to flamewars) and 49% contain no information, and the last 1% is made by the mods so....
who lied for AMD? no one in here lied, people just gave their opinion about the new procs, and you came in and posted some info about a comparison between V8 and 4x4, which both have nothing to do with AMD
the 4x4 board will be replaced by an RD790 board, and V8 also has no future :shakes:
lol nice link. it's just a plain server board where you've already had dual 1333MHz FSB for 1.5 years, just with some PCI-E lanes added. The fun part of Skulltrail and Intel's magic marketing with their nice V8 is that it still needs FB-DIMMs (even worse than the 4x4, most of that thanks to the nice power-hungry Nvidia chipset). You'll see the difference when the 790 is there.
so to me it looks like you'd better stay with your desktop system and basic school, because you clearly don't know what is going on in the market, certainly not in the server world.
so a hint: keep reading, stop posting
That happens when your daily language is Dutch and not English.