Yes
No
Don't know
macci already knows the truth, maybe he can say yes/no!
Favourite game: 3DMark
Work: Muropaketti.com - Finnish hardware site
Views and opinions about IT industry: Twitter: sampsa_kurri
Maybe soon we'll need 2 CPU and 3 GPU pots all filled with LN2!
Last edited by kiwi; 08-29-2007 at 01:36 AM.
...
I want to see the screen!
Battlefield 3: Nachthymnen666
Na-uh, hypothetical example:
I have a 3000+ that cold bugs at 300MHz HT @ -100°C.
However, I can run at 299MHz HT @ -100°C with a 250/200 memory divider.
I've seen a couple of CPUs where the memory frequency is greater than the cold bug frequency, which makes me think it's not the memory controller.
Ok feel free to prove me wrong.
Athlon64 3700+ KACAE 0605APAW @ 3455MHz 314x11 1.92v/Vapochill || Core 2 Duo E8500 Q807 @ 6060MHz 638x9.5 1.95v LN2 @ -120'c || Athlon64 FX-55 CABCE 0516WPMW @ 3916MHz 261x15 1.802v/LN2 @ -40c || DFI LP UT CFX3200-DR || DFI LP UT NF4 SLI-DR || DFI LP UT NF4 Ultra D || Sapphire X1950XT || 2x256MB Kingston HyperX BH-5 @ 290MHz 2-2-2-5 3.94v || 2x256MB G.Skill TCCD @ 350MHz 3-4-4-8 3.1v || 2x256MB Kingston HyperX BH-5 @ 294MHz 2-2-2-5 3.94v
lol the thread question is very different than the poll ones...
Anyway, I don't know if they achieved 30k, and I think 98% of people don't either.
Well... seems like people at AMD have what they wanted: people talking about them.
K10 should outperform C2D, but I don't know if it will. First of all, they should eradicate the cold bug.
K10 should outperform C2D, yes... but Intel will prolly just pull some stunt and release Nehalem sooner than originally planned if K10 performs unexpectedly well...
As for the 30k in 3DMark06...
I don't think it will happen with a K10 @ 3GHz...
It took Shamino and Kinc + a C2D @ 5.11GHz to get 27k...
That would make it more than a 50% clock-for-clock performance difference from C2D to K10... and I don't think that will happen...
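The gap implied by those numbers can be sanity-checked with quick back-of-the-envelope arithmetic. This is a rough sketch that assumes, unrealistically, that the 3DMark06 score scales linearly with CPU clock (the GPU contributes heavily in practice), so treat the result as a crude bound:

```python
# Implied clock-for-clock advantage K10 would need to hit 30k at 3GHz,
# given the 27k C2D record at 5.11GHz. Assumes score ~ CPU clock.
c2d_score, c2d_clock_ghz = 27_000, 5.11   # Shamino/Kinc's C2D run
k10_score, k10_clock_ghz = 30_000, 3.0    # hypothetical K10 run

c2d_per_ghz = c2d_score / c2d_clock_ghz   # ~5284 points per GHz
k10_per_ghz = k10_score / k10_clock_ghz   # 10000 points per GHz

advantage = k10_per_ghz / c2d_per_ghz - 1
print(f"K10 would need to be ~{advantage:.0%} faster per clock")
```

Under that crude model the required gap comes out closer to 90% than 50%, which only strengthens the point.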
Enough crap. Let's see what the official spokespeople say.
AMD's Quads to outperform Intel's by over 40%
http://www.youtube.com/watch?v=G_n3wvsfq4Y
3GHZ demo is TRIFIRE:
http://www.youtube.com/watch?v=R7EZmYth6TM
EDIT TO FIX FIRST LINK
Last edited by cadaveca; 08-29-2007 at 07:31 AM.
I don't see any mention of 3DMark06 or 30k in the second link.
Intel Core i7 920 @ 3.8GHz 1.28V (Core Contact Freezer)
Asus X58 P6T
6GB OCZ Gold DDR3-1600MHz 8-8-8-24
XFX HD5870
WD 1TB Black HD
Corsair 850TX
Cooler Master HAF 922
Drivers always seem to be optimized for benching as of late. I've got a few 2900XT systems, XP, Vista, 32-bit and 64-bit, and in most scenarios having a second card merely hampers performance (except in benches). We don't see this in the popular benches because the driver team has quite obviously spent a considerable amount of time making those apps work very well.
Yes, I do, and see above. Part of the problem with quad-SLI, IMHO, was the platform it was on, as well as some really intense driver issues. Vista allows this to be overcome, but seemingly each app must be coded for within the driver for multi-GPU rendering to take full advantage of the GPU power. Seems purely the fault of M$, but of course you need some good programmers for your drivers too...
With that in mind, and knowing how much power these cards draw (or rather how much they aren't sucking up, and how far they really are from the max they CAN draw), I truly feel that there's more to these cards, and the 8800s, than meets the eye. I do remember something a long time ago about both R600 and G80 liking about 150MHz PCI-E, and I'm sure if you talk to the "TOP benchers", they'll confirm that these cards behave as I imply.
Now, I really think that if 2900s can reach 30k, then so can 8800s... I look at this situation as a platform presentation, not just CPU, video, or whatever... mere 15% gains in each area (CPU, video, mobo) make for some really significant gains overall.
Last edited by cadaveca; 08-29-2007 at 08:42 AM.
no way...
Core Quad Q9300
ASUS P5Q-PRO
4GB RAM
Ati Radeon 4890
Enermax Liberty 620w
Not impossible, just very hard to believe (very, very, very, very, very, very, hard).
//Andreas
You missed the latest and most proper one.
http://www.techarp.com/showarticle.a...tno=434&pgno=0
I love this part
"The pics are gone with my stolen laptop, though."
R7 1700 | ASRock X370 Gaming K4 | 16 GB G.Skill Flare X | Corsair AX 750W | NH-D15 | EVO 960 | Fractal Design Define C
It's most likely true.
I remember fanboys also trolling about the 8800 and its gigantic leap in performance...
In the end their butts got nailed by the impressive results...
I say no. Even if K10 is 40% faster than Kentsfield, it would need to be at least 3GHz just to keep pace with Shamino's 27k record with the 5.1GHz Kentsfield. For AMD to hit 30k, they would have to either make it scale even better or hit higher clock speeds, which I don't feel is possible with their SOI process; IMO they're very lucky to hit 3GHz in the first place. And all of this assumes K10 is 40% faster clock-for-clock than Kentsfield, which may be false and mean even higher clock speeds are needed.
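For what it's worth, the "at least 3GHz" figure can be made concrete with a quick calculation. This sketch assumes the speculative 40% clock-for-clock lead and linear scaling of score with clock, neither of which is confirmed:

```python
# How fast would K10 need to clock just to MATCH the 27k record,
# assuming a 40% clock-for-clock advantage and linear scaling with clock?
c2d_clock_ghz = 5.1     # Shamino's Kentsfield run
ipc_advantage = 0.40    # hypothetical K10 lead per clock

required_ghz = c2d_clock_ghz / (1 + ipc_advantage)
print(f"K10 would need ~{required_ghz:.2f} GHz just to match 27k")
```

So even under the optimistic 40% assumption, matching (let alone beating) the record would take roughly 3.6GHz.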
But theinquirer has never been a complete AMD fan site like AMDZone, and the fact that no reputable site like VR-Zone or someone like coolaler has posted the same info makes it hard to believe. And don't forget, there was tons of hype for the R600, none of which was true.
And all of that is just logic; take a look at my previous post and think, is it really possible? Not to mention, when he OC'd the CPU from 2.5GHz to 3GHz, he got a 26% performance increase while the clock speed increased by only 20%. Unless you can prove to me that K10 scales better than 100% in 3DMark06, I'm calling BS.
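The scaling claim in that post is easy to check with quick arithmetic on the numbers quoted above:

```python
# If a 2.5 -> 3.0 GHz overclock really gave a 26% score increase,
# the implied clock scaling would be superlinear.
clock_gain = 3.0 / 2.5 - 1   # = 0.20, a 20% clock increase
score_gain = 0.26            # claimed 26% score increase

scaling = score_gain / clock_gain
print(f"Implied scaling: {scaling:.0%} of the clock gain")
```

A result above 100% means the score grew faster than the clock, which is implausible for a benchmark that is largely GPU-bound; hence the "calling BS".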
perhaps he actually was robbed if this article is true, but last time I checked only shamino and kinc were robbed
http://www.theinquirer.net/?article=41909