See you on Wednesday.
If those results are real and the tested unit is a final one with the latest BIOS, motherboard and drivers... then the results are pretty disappointing.
BD runs very hot, it's much slower than a 2600K, and its single-core performance is very poor (20s in SuperPI? omg... worse than a Core 2!) :(:(:(:(
In fact, I have doubts even comparing it with a Phenom II X6... because it will perform more or less the same as that, but it will be much more expensive.
This thread has an insane number of views in less than 2 days...
so this thread will be deleted come wednesday ;)
Well, that's just great. I was cleaning my comp and had to reapply thermal paste, and I dropped my 1090T; now all the pins are bent... s.o.b!
Jeez, good thing I didn't become a surgeon like my parents wanted.
I'd rather have bent pins on the bottom of a CPU than bent pins in a CPU socket. Of course, I wouldn't want either to happen, but the former is much easier to recover from than the latter.
Took me some effort to find it ... then I got smart and looked in my history.. and there it was:
http://www.gossamer-threads.com/list...readed#1409305
There was more somewhere else, I'll post it when/if I find it...
--The loon
That erratum should be identical on all Bulldozer-based CPUs. It is a per-module issue where instructions get pushed out incorrectly, so the core needs to re-fetch instructions from the L2 (if present there [mostly inclusive, so it SHOULD be]) with an ~18 cycle penalty for each occurrence.
In heavy single-threaded loads, with the other core intermittently active, you'll see the heavy thread being penalized excessively, with the instruction fetches causing improper invalidations of cache lines... up to about 10% or so with normal usage (my estimate, based on the architecture details and the patch's code).
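Back-of-the-envelope, just to show where a figure like that can come from: if a hypothetical ~18-cycle re-fetch stall landed once per ~180 cycles of useful work, that's 18/180 ≈ 10% lost. The real unknown is how often the re-fetch actually fires, which is the part I could only estimate.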
It gets REALLY interesting when the entire module is loaded. Now you have BOTH cores causing fetches which cross-invalidate repeatedly, and the L1 is essentially bypassed as both cores keep going back to the L2 for their instructions.
I expected this issue the very first instant I saw a shared L1I cache setup... but I'm a long-time coder in heavily multi-threaded environments so those types of issues stand out for me.
I even wrote a program to test my theory (yeah... I'm a geek), so my numbers aren't just wild guesses, though they are case-specific.
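For anyone who wants to poke at this themselves, the skeleton of that kind of test is simple. This is just a sketch of the idea, NOT my actual program: it assumes Linux + pthreads and that logical CPUs 0 and 1 are the two cores of one module (you'd have to verify that on real hardware), and the dummy workload would need a much larger instruction footprint to really hammer the shared L1I.

```c
/* Sketch of a module-contention test: time a worker running alone on one
 * core of a module, then again while a sibling thread hammers the other
 * core of the same module.  Assumes Linux/gcc with pthreads; logical CPUs
 * 0 and 1 are assumed to be one Bulldozer module (check /proc/cpuinfo or
 * lstopo on real hardware -- placeholders only). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static volatile int sibling_run = 0;

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Dummy compute loop; a real L1I test would want a large instruction
 * footprint (huge unrolled/generated code), not this tight loop. */
static uint64_t work(uint64_t iters)
{
    uint64_t x = 1;
    for (uint64_t i = 0; i < iters; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

static void *sibling(void *arg)
{
    (void)arg;
    pin_to_cpu(1);                    /* second core of the module (assumed) */
    while (sibling_run)
        work(1 << 16);
    return NULL;
}

static double timed_run(uint64_t iters)
{
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    volatile uint64_t sink = work(iters);
    (void)sink;
    clock_gettime(CLOCK_MONOTONIC, &b);
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    const uint64_t iters = 200000000ULL;
    pin_to_cpu(0);                    /* first core of the module (assumed) */

    double alone = timed_run(iters);  /* baseline: module half-loaded */

    sibling_run = 1;
    pthread_t t;
    pthread_create(&t, NULL, sibling, NULL);
    double shared = timed_run(iters); /* both cores of the module busy */
    sibling_run = 0;
    pthread_join(t, NULL);

    printf("alone:  %.3f s\nshared: %.3f s  (+%.1f%%)\n",
           alone, shared, 100.0 * (shared / alone - 1.0));
    return 0;
}
```

The delta between the two timings is the module-sharing penalty for that particular workload; it says nothing about how often real software hits the erratum, which is why my numbers are case-specific.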
Or I'm just completely missing something... ( we'll know in just a couple days ).
--The loon
Thanks for the link, I never saw the reply :-)
That said, however, I'd be willing to bet that my numbers have merit in some loads (likely those "microbenchmarks" mentioned). If the contention is as it was described, the performance effect should be greater - unless it is happening in rarer circumstances than I had anticipated (which is certainly possible/likely, given 64KB of cache lines...).
(I'd have been even more conservative if I'd known exactly how widely my comment would be parroted!)
--The loon
Yes, my numbers are accurate... except they only apply about 20% of the time, or less... the problem apparently irons itself out under sustained load, which is not what I was anticipating when I did my simulations (yes, simulations - based on scant data, but still... I'm a dork :clap: ).
It was something I had entirely dismissed as likely, given the nature of the patch - changing the memory mapping, going semi-static (a security risk) and much more was being done in the patch as I saw it. Not sure what the final patch looks like; I just found out about all this parroting of my post (google it - it's freaking CRAZY!) on other forums and came back here to do damage control...
--The loon
Hi guys,
I've been on the road the last couple of days, so didn't have much time to read or to reply in this crazy thread. Just wanted to say that we do not build the hardware, we just test it. We did it countless times before (Clarkdale, Sandy Bridge, GTX480) and we will most likely do it again. Our interest is to keep our readers well informed, so we try as much as possible to be very accurate. Until now, I really do not remember being wrong, even though in other threads some people also said that these are bogus tests and so on. Of course, I also do not remember any of those guys saying "you are right" after the final reviews came up, but that is a different story :)
We have been playing with hardware and testing hardware a long time now, and we will be doing this for a long while, so it would really not be in our best interest to put out wrong tests or anything like this. After all, there is one more day until all the reviews are out, so anybody can compare all the results we got with all the other results from the web and see if we were right or not. I personally am looking forward to that :)
With this preview, like with all the others, we tried to double-check every little thing, to get the latest BIOS, the latest silicon revision, the latest software updates and so on. Also, we could not make a very big preview with many game resolutions, many applications and so on, because we were very, very busy with MOA, so we tried to choose the best scenarios to put the emphasis on the CPU, not on the VGA or anything else. Even so, for a preview, I would say we got enough results, and I am sure that the reviews coming tomorrow will have many more results to show exactly how Bulldozer performs.
In the end, remember that competition is the basis of progress and evolution, and it is very important for all of us to see good products on the market, so our job as enthusiasts and press is just to show things as they are, in order to help the companies improve their products. It does not matter if it is Intel Prescott, Nvidia GTX590 or AMD Bulldozer. When something should have been better, it is our job to say so, so that future products will be better. As a hardware enthusiast I care most about performance and good products, not about labels and marketing, and as hardware press, I care about correctly informing our readers, not about "shocking" stories that are not true.
I hope this sheds some light on all the things discussed so thoroughly in this thread, and also on our position and intentions.
Thanks for that Monstru. :)
Wait for tomorrow.
If these results are true, we are at the mercy of Intel’s prices for years to come :down:
Quote:
How did you get the chip?
You can realise that we have sources for CPUs and VGAs, like most people who want to get hardware in advance. You can also realise that it is useless to ask; nobody will ever divulge such sources :)
Quote:
Also, why HAWX and not Crysis or Metro 2033?
For heavy GPU benchmarks we used Unigine, which scales with the CPU much less than Crysis and Metro (i.e. not at all, if there is actually nothing wrong with the CPU). HawX and RE5 are perfect to show the difference between two CPUs at low res, instead of just testing mainly the GPU / PCI-E controller. It is a theoretical game performance test, not a real-life one, obviously...
In the end, it does not matter why we chose one thing instead of another (we would have tested twice as many games and apps if we had had the time, and more resolutions and so on, but that would be a review, not a preview...); you will definitely have tons of reviews with tons of games tested at tons of resolutions tomorrow. We just previewed a small batch of games and tests in order to get a basic idea of how this CPU performs. In order to understand the full picture, read the reviews tomorrow; a lot of guys are working very hard to show you the new CPU in as many situations as possible...
Hey, don't insult Prescott, I have a WR on Prescott... or rather, Presshott.
Anyone hear about the Windows kernel needing an update because it is heavily impacting performance, or something like that? I am going to try Windows 8.
There was a slide about Win 8 performance; it was about 10% in the best case. Most likely it's due to offloading lightly threaded applications onto a single module -> turning off the other modules -> increasing Turbo Core clock speeds. Win 7 wouldn't know how to do that with BD modules unless it gets patched. But that's not a BD-only benefit; SB can benefit from the same scheduling improvement.
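Just to make the idea concrete, here's a rough sketch of what module-aware placement amounts to if you did it by hand on Win 7 (plain C against the Win32 affinity API; the assumption that logical CPUs 0 and 1 map to one module is mine, you'd have to check the topology on real hardware). Win 8's scheduler would presumably do this automatically.

```c
/* Rough sketch: confine a lightly threaded process to the two logical CPUs
 * of one module so the other modules can stay idle and Turbo Core has
 * headroom.  The mask 0x3 (CPUs 0 and 1 = one module) is an assumption;
 * verify the topology with GetLogicalProcessorInformation or CPU-Z. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR module_mask = 0x3;  /* bits 0 and 1 -> first two logical CPUs */

    if (!SetProcessAffinityMask(GetCurrentProcess(), module_mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Process confined to logical CPUs 0-1 (one BD module, assumed).\n");

    /* ... run the lightly threaded workload here ... */
    return 0;
}
```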
just one question as a regular reader:
WHAT IS THE POINT IN SHOWING NON-REAL WORLD PERFORMANCE?
I don't buy some magic numbers from some stupid synthetic tests; I want real-world numbers in real-world tests (application tests at widespread settings). Almost all hardware sites are completely useless because they don't put their focus on real-world performance and user experience, and yours is one of them - you show a completely wrong picture of what users can expect from the reviewed hardware by doing such completely useless junk tests.