I have read the entire blog.
It is mostly about video rendering, with very tightly packed code feeding the modules to show Bulldozer's full strength. Normal apps wouldn't see such benefits from scheduler re-writing.
He is babbling something about PCI-X, which has next to nothing to do with video rendering, so it could be generic. But I'm afraid it's just like the other stories on the right side of the page...
Actually, they will see, but it has nothing to do with PCI-X... :)
Quote:
Normal apps wouldn't see such benefits from scheduler re-writing.
I think you're referring to the MadShrimps review, excellent testing of different timings, dividers, and NB speeds.
I'm still going to buy the 8150, too much overclocking fun potential :)
My source said they will have the FX-6100 and FX-8150 in a local store tomorrow.
They will have only a very few in stock, and the prices:
$2280 HKD for FX-8150 ( ~ $293 USD )
$1690 HKD for FX-6100 ( ~ $217 USD )
I think it's too expensive; at that price I can get better performance and a better deal with a Core i5-2500K / Core i7-2600 :(
Someone raised HTT and it had a big impact on Cinebench, but I can't remember where. Does anyone know?
Too many threads and reviews all over the place, but I think it was here at XS.
Edit: Found it.
Also, is there any comparison review of different board brands like Asus and ASRock? Is there really a big performance difference between them?
It was sin0822:
http://img840.imageshack.us/img840/7...tihttscale.jpg
found here:
http://www.xtremesystems.org/forums/...Results-coming!
Thanks, Manicdan.
If it has such an impact on benchmarks, I wonder why AMD kept it at 200 MHz for Zambezi; I guess they had their chance to change it with the 990FX.
Well, neither does DGLee: http://www.xtremesystems.org/forums/...=1#post4974690
chew* thinks it depends on the board model; sin0822 uses a Gigabyte.
That seems to indicate that there's a clock domain mismatch somewhere, and using 250MHz HTT fixes the mismatch for that particular combination of clocks/multipliers. (But there's no guarantee it wouldn't create a new mismatch somewhere else, e.g. at stock speeds.)
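To make the clock-domain point concrete, here is a minimal sketch of the arithmetic: every clock on these boards is the HTT reference clock times a multiplier, so moving from 200 MHz to 250 MHz HTT forces a different set of (integer-stepped) multipliers, which shifts the NB and HT-link clocks even when the CPU clock target is similar. The multiplier values below are assumptions chosen for illustration, not settings taken from the thread.

```python
# Sketch: derived clock domains from the HTT reference clock.
# Multiplier choices here are illustrative, not from the thread.
def clocks(htt_mhz, cpu_mult, nb_mult, ht_mult):
    return {
        "cpu": htt_mhz * cpu_mult,
        "nb": htt_mhz * nb_mult,
        "ht_link": htt_mhz * ht_mult,
    }

stock = clocks(200, cpu_mult=18, nb_mult=11, ht_mult=13)
# -> CPU 3600, NB 2200, HT link 2600 MHz

raised = clocks(250, cpu_mult=16, nb_mult=10, ht_mult=10)
# -> CPU 4000, NB 2500, HT link 2500 MHz: the domains no longer
# sit at the same ratios to each other as at stock.

print(stock)
print(raised)
```

With only integer (or half-step) multipliers available, some clock combinations are simply unreachable from a 200 MHz reference but reachable from 250 MHz, which is one way a mismatch could be "fixed" for one combination and reintroduced at another.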
There is just so much to play with from AMD; I'm having a really hard time not wanting to buy one, even though the performance is not impressive.
I think I need a girlfriend to keep me busy...
Manicdan: If you are considering something serious like getting a girlfriend, I suggest you buy a BD instead. ;)
Thanks, but I guess I'll wait for the 4170 & the Fix.
AMD Bulldozer, can it get even worse?
http://scalibq.wordpress.com/2011/10...et-even-worse/
In the meantime I'll grab a 960T :p:
That BSOD is common on i7 systems when you don't have enough vcore. I wonder if AMD's problem is insufficient stock vcore, or some other bug in software or elsewhere. If there is some sort of software workaround, it'd be interesting to see if it would help other processors that show the same BSODs.
I have yet to see this on mine, running Windows 8 and Windows 7. I have done a lot of tests and also have it running WCG when I am not benching. I also have CnQ turned off, as well as other power-save settings in the BIOS and Windows. It'd be interesting to see if it is a result of those.
another Fusion?
AMD hires Mark Papermaster as CTO
http://www.theinquirer.net/inquirer/...apermaster-cto
BD will be cheaper.
I'm sorry if this might sound :am:, as I do not have that much understanding of CPU design/operation.
I went back and looked at the picture of the BD core and how it works on 1 and 2 threads.
1 thread will have all of the resources of the module: 100% of a traditional x86 core.
2 threads will have some resources shared between the two cores in the module: up to 80% scaling per traditional x86 core.
We have all read about this and have even made comments that a module would be like ~180% of a single traditional x86 core.
But here is my question, and see if this makes sense: if two threads are being worked on in a module, and neither core has full use of all the resources, wouldn't this bring each module down to only ~150%-160% of two fully working traditional x86 cores (like in the 1100T)?
As the resources in each module have to be shared, would this not work out to 80% + 80% = 160%?
Thank you for your time.
As I recall it, CMT is billed as the second core in a module adding ~80% of what a full second core would add, rather than each core scaling to 80%; the intent is that a module delivers ~180% of single-core performance, before software scaling limitations are thrown in. This is done at the expense of a ~12% die-space increase per module, IIRC, resulting in a ~5% larger die overall.
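The two readings of the "80%" figure quoted above can be put side by side as plain arithmetic (purely illustrative, using only the percentages from the posts):

```python
# Two readings of the "80%" CMT figure (illustration only).
single_core = 1.0  # throughput of one traditional core

# Reading A (the marketing claim): a loaded module delivers ~180%
# of a single core, i.e. each of the two threads gets ~90%.
module_two_threads = 1.80 * single_core
per_thread = module_two_threads / 2          # ~0.90 per thread

# Reading B (the question above): each core scales to only 80%,
# so a module would deliver 80% + 80% = 160% of a single core.
module_if_80_per_core = 2 * 0.80 * single_core

print(per_thread)             # ~0.9
print(module_if_80_per_core)  # 1.6
```

So the 160% figure follows from treating "80%" as per-core scaling, while the 180% figure follows from treating it as the gain contributed by the second core; the two statements are not the same claim.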
That's just a half joke of mine, hehehe. :D
But the other half is quite serious, since I still think to this moment that the CMT of BD is, like what I've said before, quite a novel and wonderful idea as a die-size vs. theoretical-performance-scaling trade-off. :)
One fvcked up execution of an idea doesn't mean the idea itself is bad, you know. ;)
If these tests have already been posted, let me know and I can delete them. There are so many threads on BD that I just haven't gone through them all.
I ran a few tests on the FX-4100 using two cores in one module, and then two modules with one core each. The BIOS in this ASRock MB uses the terminology "cores" and "units", so I guess the units are modules? The idea was to test a total of two threads on either one or two modules/units. For all I know, maybe this isn't the proper way to test this, but at the moment I can't think of a better way. :shrug:
The tests weren't done to compare the FX-4100 to another processor, be it AMD or Intel, but only to find the difference, if any, based on what I stated above.
The down-clocked processor test speed of 3.2 GHz is just a setting I was using for another set of tests, nothing else. The one test-speed setting not shown in the SS's is the Northbridge, which was 2600.
The OS is a lite version of Windows XP 32-bit, and it's pretty much beat up from past testing, but it should be OK for these tests.
SPi 1M and PiFast are only there to show that there is no real difference in those single-threaded tests, but as seen in the SS's, the multi-threaded tests do show a difference.
This test is of one module/unit two cores testing two threads in HyperPi 1M and wPrime 32/1024.
http://i262.photobucket.com/albums/i...cores-XP-s.jpg
This test is of two modules/units using one core each testing two threads in HyperPi 1M and wPrime 32/1024.
http://i262.photobucket.com/albums/i...-unit-XP-s.jpg
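The same one-module vs. two-module pinning can also be done in software with an affinity mask instead of disabling cores in the BIOS. A minimal sketch, assuming the OS enumerates cores so that cores 2i and 2i+1 share module i (enumeration can differ by OS and BIOS, so verify before trusting results):

```python
# Sketch: affinity bitmasks for "two cores in one module" vs.
# "one core in each of two modules", assuming cores 2i and 2i+1
# share module i (OS core enumeration is an assumption here).
def module_cores(module):
    """Return the two core indices assumed to share a module."""
    return (2 * module, 2 * module + 1)

def mask(cores):
    """Build an affinity bitmask from core indices."""
    m = 0
    for c in cores:
        m |= 1 << c
    return m

one_module = mask(module_cores(0))        # cores 0,1 -> 0b0011
two_modules = mask((module_cores(0)[0],   # core 0
                    module_cores(1)[0]))  # core 2 -> 0b0101

print(bin(one_module), bin(two_modules))
```

On Windows the resulting mask can be applied with Task Manager's "Set Affinity" or `start /affinity <hexmask>`; on Linux with `taskset`. Run the same two-thread benchmark under each mask to compare shared-module vs. separate-module behavior.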
The only test I haven't seen is AIDA64.
Try running AIDA64 on them to see any write differences in the caches, when running one core per "unit" compared to two cores in one unit.
Also try IntelBurnTest set to 4 threads vs. 2 threads.
Run it normally with 4 threads, then set 2 threads to see if it's lower. You might want to try setting affinity with AOD/Task Manager on that, or something.
Then run it on two separate "units",
then on one "unit" for both cores.