Hell yeah :up:
I hope this is true, but we'll know soon enough. Sept. 10 is the Barcelona launch, right?
Puts mod hat on while shaking head:
Gentlemen:
I just checked my mail and four people have reported different posts here.
Same old thing... the AMD guys and the Intel guys at one another again.
Why guys? Most of you appear to be way too smart for this so why fight over what is really a little thing?
If they believe the AMD will do 30K then fine, thats someone's opinion..
Sounds really high to me personally, as didn't I just read that Kinc and Shamino did 27,000 at approximately 5,000MHz on an Intel?
30,000 on air at 3,000MHz on the AMD sounds awfully high, wouldn't you agree?
Now I'm far from an expert on 3DMark, but that's common sense speaking.
The only thing I ask of you is that before you post, look at the post and read it as if you're on the "other" side of this issue, and don't attack individuals.
Calling someone an idiot or a retard just because they disagree with you is a slam against yourself not them..
Keep a perspective please..
Last: Aren't we supposed to be friends here sharing a hobby that we all love?
QFT.
And Craptacular, this is one of those times where you observe, absorb, and ignore. Your argument is a proven fact and is 100% true, while his is a matter of opinion and skepticism that is needless. At any rate, neither of you are going to budge so just forget about it...
LoL, Movieman. It's geek love... which ain't no normal thing. :rofl:
I see K10 take some internal parts and go from 64-bit to 128 bit. I see some nice new cache.
I see RD790 has some cache in it too, like nForce used to have, and no one could beat that.
I see lots of strange things... so 30k, nah, not so hard. I wondered why 2900XT had pci-e 2.0 power connectors... and why people were up in arms recently about current cards NOT being pci-e 2.0. Then thought of nV with new refresh, and AMD without... and no, i don't see dead people, thanks! ;)
I love this hobby as much as the next guy. Where I differ from what I see in this and other threads is my willingness to look objectively at the hardware.
Good is good and I don't care whose name is on it.
Right now there are two 8-core Clovertown machines in my house. Why? Because I thought they were the best available for the work I do.
If the new AMDs prove superior, you can bet that they will have a new friend beside them, and they will get along or I will take a whip to them! :D
Had a dual Opty 265 here for a while... immense machine at the time. Beat the crap out of my DX3600 machine... granted, 4 cores vs 2, but they were comparable technology and timeframe...
Yea...I must say, i see some fish ran off with the hook, i wonder how you get that hook back? If it was some special metal, should not be hard to find.
So, you think about that hook, and how much it meant...you cry a tear...and then you tell your friends, in rememberance, maybe have a drink, and reflect back on the good times.
Takes a special hook to catch a special fish...
Actually, it WAS rhetorical, 'cause you did not see his original post (which hit my email, BTW), nor have you played with as much hardware as I have. Fact of the matter is as he says: few boards have proper memory remapping. P5WDH, for example, starts remap after 2GB. P5K, after 3070MB... because it does not differentiate between pci and pci-e in its allocation. And yes, I expect him to at least confirm what I'm saying, because I don't mind looking like a fool, especially when I'm right. In fact, I don't mind posting a DXdiag to show it either.
Sry for posting this more than once, but let's get this out in the open.
Enough crap. Let's show the official spokespeople.
AMD's Quads to outperform Intel's by over 40%
http://www.youtube.com/watch?v=G_n3wvsfq4Y
3GHZ demo is TRIFIRE:
http://www.youtube.com/watch?v=R7EZmYth6TM
So, unless these people are in fact lying (hence the Fudzilla comment yesterday), maybe it is far more possible than most think, and then again, maybe not. :rofl:
And i gotta get off the internets.:yepp::horse::yepp:
cadaveca, the link to the first video is incorrect
The second one was kind of meh.
I understand it's part of the business but having her state the obvious after you've had your camera jammed into the case would have annoyed me.
Here is another.
In this vid they mention that there will be a dual 1207FX board based on the 790 chipset ;)
Hopefully there will not be any BIOS skulduggery that will prevent the use of Barcelonas.
http://www.youtube.com/watch?v=6EW-hvLd_Yg
The architecture is different (L3 cache, etc.), and everyone knows that when you overclock a processor it does not necessarily scale well... which would explain the disparity between a 5GHz C2D and a 3GHz Barcelona.
It is within reason that Theo is telling the truth.
If it had 3 or 4 gpu's without his knowledge that would explain it as well.
Of your 23 total posts you've made, you decided to waste one on that :horse: Thanks for populating this thread with even more :spam:
And my suspicions are confirmed if this is what they used as was posted above...
http://www.youtube.com/watch?v=R7EZmYth6TM
Now that seems entirely plausible; 3 2900XTs with a 3GHz quad core may be able to do it. Now I know that the people at INQ are all retards...
Release the spam bots!!
You don't know the circumstances of the test.
From the pics I have seen it could be plausible that Kinc and Shamino lured the female attendants away from the rig with their irresistible 'charms' while Theo dropped behind the keyboard surreptitiously to have his way with the rig.
:D
Obviously time was a commodity of limited supply.
lol...
re·tard /rɪˈtɑrd, for 1–3, 5; ˈri-tɑrd for 4/[ri-tahrd, for 1–3, 5; ree-tahrd for 4]
1. to make slow; delay the development or progress of (an action, process, etc.); hinder or impede.
–verb (used without object)
2. to be delayed.
–noun
3. a slowing down, diminution, or hindrance, as in a machine.
4. Slang: Disparaging.
a. a mentally retarded person.
b. a person who is stupid, obtuse, or ineffective in some way: a hopeless social retard.
5. Automotive, Machinery. an adjustment made in the setting of the distributor of an internal-combustion engine so that the spark for ignition in each cylinder is generated later in the cycle.
Did they show the 3D mark settings, i.e. resolution, AA, AF...?
Thanks @ CraptacularOne :)
Learning everyday.... :)
Unless the cache is unlinked from the processor's clock, an increase in switching speed should result in a rather linear increase in overall CPU performance. Whether or not the memory subsystem and the system bus scale well enough is a different matter, but hey, high-end DDR3 + a well-OCed HTT bus probably scales well enough compared to mid-to-high-end DDR2 and a traditional 2GT/s HTT bus.
Admin powaaaaaaaaaaaaaaa !!!
oh MM with admin hat nice one.... where did you send him on vacation?
The moderator speaketh!
Now, with the mod here, things will be nicer, no? :D
Thank you!
I think Theo is telling the truth, but he probably never knew the correct specs of the machine he was benching on. The only two possible answers, IMO, are that it was run in TriFire or at a less-than-native resolution. 30k on air is probably impossible if we take what the x.i.p's say at face value. However, I would love to be able to recommend and build AMD boxes for high-end customers again.
I went to the Barcelona presentation and asked directly to the presenter about this. He confirmed the integer divider limitations were solved.
I'm pretty sure we'll have 1MHz-step memory clock adjustment all the way up to about 750MHz (DDR2-1400).
Also, the IMC received a deep upgrade (larger write buffer, dual channel is now two independent 64-bit channels, DDR2-1066 official support, the data path to the northbridge was extended to 128 bits, and so on).
30k still seems unreal, but I think it will be at least 10% faster per clock than Penryn in general applications, and way more in highly multithreaded workloads.
Means absolutely nothing. Most of us don't believe anything Intel or AMD says without proof.
3DMark has always proven its uselessness for anything other than platform stability testing and the like. Since it doesn't correlate to any real-world games, trying to use it as a standard is misleading to say the least. Please note, I said the same thing when P4 was winning.
Quote:
I'm pretty sure we'll have 1MHz-step memory clock adjustment all the way up to about 750MHz (DDR2-1400).
Also, the IMC received a deep upgrade (larger write buffer, dual channel is now two independent 64-bit channels, DDR2-1066 official support, the data path to the northbridge was extended to 128 bits, and so on).
30k still seems unreal, but I think it will be at least 10% faster per clock than Penryn in general applications, and way more in highly multithreaded workloads.
How much faster is K10? I don't think there will be a single performance-number advantage. In some apps it might be 25% faster; other apps might show it as being 20% slower.
You are back in this topic now, it seems.
Anyhow, 2 weeks to go until we find out the truth.
Based on the TechArp article, K10 will have a 20-30% advantage against Intel's counterparts (clock for clock), with excesses of a 170% advantage in rare cases.
Sounds better than C2D did against K8.
Hmmm...
K10 x RD790 x X2900XT CF x engineered and optimized from the ground up within one company that has a proven track record for innovation = 30K 3DMark06? It's not impossible!
Hey... get away from my dream... it's all mine... *walks away laughing sinisterly*
Chances are he probably did see the 30K score, the problem is he's not sure of the test details or the specs.
All the demos we have seen of the machine include a Tri-fire system.
How much extra pump can we expect from a tri-fire Vs. a 2 card crossfire?
Also someone requested 3D06 marks on different resolutions on same system.
I would like to see such a comparison to see how much of difference test resolution makes.
Dude, it's the INQ. And this kid's laptop was stolen? By whom, the NDA thugs?
At any rate, K10 is gonna be sweet, and so will Penryn/Wolfdale.
But yeah, that had to have been 3DMark05.
I would like to see AMD gain ground next year
I'm not a CPU designer, but I do know a lot about the process of CPU design and manufacturing. It's my hobby. I love technology and innovation. I dislike "patches" and workarounds (all to an extent, of course).
Anyway I'd like to share my 2Mhz.
IMO most posters forget one important fact: that no one knows how efficient a true quad-core CPU is compared to a 2x dual-core MCM solution.
Sure 30k is amazing and I also believe it was a trifire setup. But still it is not "impossible".
One must know the disadvantages AMD's K8 has against Intel's Conroe, and then compare that to the K10. Most of Conroe's advantages are taken care of by the K10. One of K8's biggest problems is the "inter-chip bandwidth": the memory throughput is bottlenecked. That's why you don't see very efficient use of DDR2's bandwidth on K8s.
Almost every bottleneck has been taken care of. And remember, we are talking about a quad-CPU core, not a quad-core CPU: it's 1 core with 4 CPU units sharing 1 L3 cache of 2MB. So who can tell what performance advantage you can get? It's probably not much in some benchmarks, but it could make a difference in others. Also, the memory controller seems to be clocked either higher or lower than the core itself.
That will also mean that the system memory could be able to run at its default clock no matter the CPU clock. There are many more enhancements that could make the K10 perform well beyond expectations.
But to get the most out of a K10, software must be recompiled. To make full use of the FPU you don't have any choice anyway.
They must have had the gfx cards on liquid helium.
Well, 3 cards in TriFire isn't as impressive as it sounded from the beginning, is it now? I have no doubt that Barcelona is going to be good, but it isn't enough to save the R600 ;)
I am almost positive that this is tri-CrossFire: the demo system going around, the "other announcement", and the fact that, face it, it's mathematically impossible for K10 to be THAT good. It still seems that it's a beast of a CPU, just not enough to beat a 5GHz Kentsfield into a bloody pulp...
Why would anyone in any kind of official position lie?
I'm actually now more interested in their implementation of more than 2 GPUs in Crossfire. Since Nvidia's Quad SLI was a bust. Maybe they have figured out a better way to distribute the load between the GPUs than Nvidia did.
So 23K for 2x XT and 30K for TriFire, not too shabby after all! :D Sounds plausible to me, 2x16 full-speed PCI-e and all that. Now where is my RD790 board? (Just kidding, I don't have an RD790 board... yet)
VictorWang yesterday posted a single-card 2900XT on a Core 2 @ almost 30k in '05. I highly doubt the truth of these 30k '06 benches, seeing that.
Victor's thread is here:
http://www.xtremesystems.org/forums/...d.php?t=156898
So, victor might be able to get 45-50k in 05 on that system, I do not doubt he has the skill to pull it off. The more i dig, the better this is looking for the consumer!
Assuming this is true:
http://www.tweaktown.com/news/8067/w...emo/index.html
6400+ with 2x 2900XT on 790G gave 12,000 in 3DMark06.
The CPU change alone won't add 18K, but can the third card, plus OCing all 3 cards, give even 12K more?
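One way to frame that question is a rough estimate. Every number below (the graphics share of the score, the third-card scaling, the OC headroom) is an assumption for illustration, not measured data:

```python
# Hypothetical: how much could a third card plus overclocking add on top of
# the reported 12,000 with 2x 2900XT?

base_total = 12000
gpu_share = 0.75                  # assumed fraction of the total driven by graphics
gpu_part = base_total * gpu_share # 9000
cpu_part = base_total - gpu_part  # 3000

third_card_scaling = 1.4  # assume the 3rd card adds 40% (far from perfect scaling)
oc_gain = 1.15            # assume 15% from overclocking all three cards

new_gpu = gpu_part * third_card_scaling * oc_gain
print(round(new_gpu + cpu_part))  # 17490: still well short of 30,000
```

Under these assumptions the CPU sub-score would have to carry an enormous share of the remaining gap, which is exactly why the 30K claim invites skepticism.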
Intel and AMD know; according to Intel the difference is negligible. AMD, OTOH, claims native is better but doesn't say by how much.
I expect the truth to be in between.
Let's look at the current X2s and Conroe. We can call the K8 X2 the best MCM-like approach: 2 cores connected by an outside link.
C2D uses a shared L2, the most "native" approach.
Time and time again K8 showed it was in no way bottlenecked by its memory system. It only cares about low latency.
Quote:
One must know the disadvantages AMD's K8 has over intel's Conroe. And then compare that to the K10. Most of Conroe's advantages are taken care of by the K10. One of K8's biggest problem is the "interchip bandwidth". The memory throughput is bottlenecked. That's why you don't see much efficient use of DDR2's bandwidth on K8's.
K10's major flaw is its cache size; 512KB is small, and the L3 will be a bottleneck. Also its latency, if true (38 cycles), is horrible compared with K8/C2D/Penryn (12-14). The latest reports put Penryn's massive L2 at 13-cycle latency, which is nothing short of astounding.
Quote:
Almost every bottleneck has been taken care of. And remember, we are talking about a quad-CPU core, not a quad-core CPU: it's 1 core with 4 CPU units sharing 1 L3 cache of 2MB. So who can tell what performance advantage you can get? It's probably not much in some benchmarks, but it could make a difference in others. Also, the memory controller seems to be clocked either higher or lower than the core itself.
That will also mean that the system memory could be able to run at its default clock no matter the CPU clock. There are many more enhancements that could make the K10 perform well beyond expectations.
Umh, not exactly. The changes made will offer sizable benefits for existing code.
Quote:
But to get the most of a K10 software must be recompiled. To make fully use of the FPU you don't have any choice anyway.
savantu,
What is AMD's L2 latency on K10?
savantu,
I didn't know Intel and AMD were using the same caches? :rolleyes:
^wow
goes to show what fanaticism does to people lol
The guy clearly stated: 1x quad Barcelona @ 3GHz + 2x 2900 OCed = 30K
wonder what ~4ghz does.. 4x4 @ ~3.5ghz :slobber:
I'm just happy that AMD are finally gonna release K10 very soon. My benchmarks are beginning to look like some from my AXP days....:ROTF:
Can't wait to get my hands on one :)
Intel's CSI sounds interesting......and regardless of mistakes made in the past...Intel don't seem to make many blunders these days.
Why not? I think it's a combination of both. How exactly does 3DM06 calculate the CPU score? Does the L3 give the score an extra boost? Does the wider datapath give it an extra boost? Or is the redesigned memory controller more efficient with memory bandwidth? Perhaps it's the native quad core that works more efficiently. Well, I've never dug into 3DM06's calculations, so I don't know.
My point is that there are so many unknown factors that it's not easy to say if it's "impossible" or not on this new platform with only 2 cards. As said before, Conroe is not much faster than the K8 in 3DM06.
So until we get a confirmation all we can do is speculate about it.
One thing is sure, AMD's new platform looks promising. Even if 3 Cards is needed for those scores, you bet it will be a success for AMD.
True or not, TriFire or not, it doesn't matter. Many are now more interested than ever and all focused on AMD. Penryn? Intel gave too much of it away already; there's nothing "new" about it anymore. In fact, some are disappointed by its performance. So even if Phenom turns out to be only 5% faster, most will go for Phenom and the TriFire option.
I'm more interested in the performance between AMD's chipset and nVidia's.
On this new platform.
All I'm saying is that his "reports" haven't exactly panned out if you know what I mean ;) The INQ will post anything and they continue to do so. I don't even know why people even bother to respond to their "reports" like they have any validity. I'm sure we'll all find out that it was a system using 3 (or more) GPUs to achieve that score in 06.
Regardless if it's true or not. I wouldn't be surprised if we see a few benchmarks accidentally being seen before next week is up.
Why do people always think no company in the world would keep such performance a secret, that they would make an intentional *leak*? Who knows, maybe it broke 30k in '06. Personally I think they mean '05, but why wouldn't a company like a complete surprise win? I would keep 30k in '06 a secret till release if I headed AMD, but then again, that's just my 2 cents.
I agree that Penryn's memory sub-system is incredible, but lets have a bit more faith in AMD.
One of the greatest strengths of the Athlon/Hammer and now Barcelona, is that it has a large L1 cache.
The K10 improves upon the L1 cache in Hammer by making it two port.
Also, Shanghai (the 45nm successor for Barcelona) still has the same L2 cache size, while the L3 cache is expanded to 6MB.
Because of this, it seems that the AMD engineers didn't think the size of the L2 cache in K10 was going to limit its performance; otherwise they would have addressed it in Shanghai.
Sure I can, if you don't post to me :) I, like most folks, will wait for K10 results. I don't think it's lovely; it's being hidden. I don't think it's sweet when we're not given a taste of what it can do. Theo has been caught in tons of lies, and so have Fuad, Charlie and others at that site. What happened to taking anything they say with a grain of salt?
I honestly don't care about 3DBung-O-LiO marks because it doesn't correspond to anything on the market =P I'd much rather see SSE4 performance, maybe Crysis or Halo 3, maybe even DivX or XPeg results. Either of these processors could score 40,000 3DMarks and still get whacked in real-world games, so what in the hell would it prove?
Well, I'm sure having double the power would at most double the score. So if we had a quad-core 6400+ and quad-fire cards, it would give ~24K.
Phenom @ 3G will probably give more than a (theoretical ;)) quad-core 6400+ would, but a dual-card setup would score less than quad-fire by just as much, so we're back at square one.
Even if we consider a much faster dual-card setup, scoring say 15K OCed, that leaves 15K for the CPU difference - not possible. Or am I missing something?
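Worked through with the ballpark figures from the thread (all illustrative), the upper-bound argument above looks like this:

```python
# Doubling everything can at most double the score, so even generous
# assumptions leave a large gap to 30,000. Figures are the post's ballparks.

dual_card_6400 = 12000                    # reported 6400+ with 2x 2900XT
upper_bound_doubled = dual_card_6400 * 2  # quad-core + quad-fire, best case
print(upper_bound_doubled)                # 24000, still short of 30,000

generous_dual_card = 15000                # a heavily OCed dual-card setup
cpu_gap_needed = 30000 - generous_dual_card
print(cpu_gap_needed)                     # 15000 would have to come from the CPU alone
```

A 15K swing from the CPU alone is far beyond any plausible clock-for-clock gain, which is the poster's point.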
I'm not sure I understood you. Are you thinking that just because you have 4 cores you will somehow double your score in 06 vs a dual-core system? I will be the first to tell you that that's not how it works. A quad core will add some to your 06 score, but it will not double it, not even close as a matter of fact. However, adding more GPUs (assuming you had the CPU overhead to feed them) would give you a nice boost. I'm really interested in their multi-GPU solution, as it would seem that they have a more efficient way of doing it than Nvidia did.
Possible scenario with a 3-fire system (and 3DMark06):
SM2.0: ~13100 (calculated 20% more points for the 3rd card, OC is factored in)
SM3.0: ~14500 (same as above^^)
CPU: ~6900 @ 3GHz (hypothetical 50% better clock/clock than Kentsfield in this particular test)
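As a sanity check on whether sub-scores like these could add up to ~30K: Futuremark's actual published formula is not reproduced here; the weights, the harmonic-mean combination, and the 2.5 scale factor below are all assumptions for illustration only.

```python
# Combine hypothetical 3DMark06 sub-scores with an ASSUMED weighting.
# Harmonic mean is used because a low sub-score should drag the total
# down hardest; the weights are guesses, not Futuremark's.

def combined_score(sm2, sm3, cpu, w_sm2=0.4, w_sm3=0.4, w_cpu=0.2):
    return (w_sm2 + w_sm3 + w_cpu) / (w_sm2 / sm2 + w_sm3 / sm3 + w_cpu / cpu)

total = combined_score(13100, 14500, 6900) * 2.5  # 2.5 is an assumed scale factor
print(round(total))  # roughly 28,700 under these assumed weights
```

Even under this made-up weighting, sub-scores in that range land in the high-20K area, so the scenario is at least internally consistent with a near-30K total.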
QFT!
But even for 3DMarks, it would have been impressive if this, or even 15K, had been achieved in software mode by quad cores with just one GPU. Old-school testers used to run tests in software mode to check the processor's power. Quake and UT were tested this way. Come on, Sept 10th :D
I think they were referring more to server tasks than anything having to do with gaming. I don't think they are going to be all that much faster than the core 2 architecture in gaming and benchmarks. I can see 15-20% on a good day. And that will be a monumental triumph for AMD if they can put those kind of numbers up. Which happens to be roughly the same performance advantage that C2D is currently enjoying over AMD clock for clock.
It's a see saw, they will continue to out do each other, that's what they are supposed to do. But I really don't see AMD coming out with a chip that can push a set (2) of HD2900XTs to 30k in 06.
Not even server tasks, more overly memory bandwidth, embarrassingly parallel benchmarks like SPECint_rate and SPECfp_rate that have very little correlation with most enterprise applications. I personally expect Phenom to be slower clock-for-clock than Kentsfield on desktop applications.
It's a shot in the dark, k0nsl.
Ryan
Right,ok. lol
i can believe it, but yet as well it is fishy, only time will tell but either way,
i'm getting one of these bad boys :D
I thought Tri-fire was 2xGFX, 1xPHYSICS?
Me too, if they rock. :up: What I will NOT do is buy just because it is not Intel, LOL! Just as I bought AMD when the hot Prescott sucked, IMHO! I'm not using 3DMarks as a measurement of which is faster. I'll pick apps, just as I did when I chose A64 over P4 because it was faster, or C2D over X2 for the same reason. Not some silly fanboy MISPLACED love for either company :up:
:rofl:
Quote:
Originally Posted by Informal