
View Full Version : AMD dual core preview



pfoot
03-12-2005, 09:15 AM
Got the link from [H], looks like a single benchmark of the Toledo.


Looks good: 2 x 2.4 GHz, Socket 939, 1.4V, Thermaltake heatsink, nothing special. Damn near 2x the performance!!

Translated link:
http://translate.google.com/translate?u=http%3A%2F%2Fwww.hwupgrade.it%2Farticoli%2F1193%2Findex.html&langpair=it%7Cen&hl=en&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools


pfoot

agenda2005
03-12-2005, 09:32 AM
Nice scores, but 512KB per core isn't what AMD promised. I hope we will have 1MB per core (2MB total). Promising performance, though. Thanks for this info, pfoot.

pfoot
03-12-2005, 09:48 AM
If you look at the CPU-Z screenshot, it shows 1024KB for CPU #1; that would mean CPU #2 also has 1024KB.

Maybe someone can translate this better than Google:

"On board is integrated two cache L2 distinguished, ciascuna in quantitative par to 1 Mbyte"

pfoot

blueworm
03-12-2005, 10:13 AM
Is it still due out in Q3?

ozzimark
03-12-2005, 10:20 AM
If you look at the CPU-Z screenshot, it shows 1024KB for CPU #1; that would mean CPU #2 also has 1024KB.

Maybe someone can translate this better than Google:

"On board is integrated two cache L2 distinguished, ciascuna in quantitative par to 1 Mbyte"

pfoot
Yeah, I was under the impression it said two 1MB L2s.
The fact that the screenshot shows 1MB for CPU #1 settles it for me.

Sentential
03-12-2005, 10:32 AM
Looks really good. I just hope they put the new tech to good use :toast:

agenda2005
03-12-2005, 10:33 AM
I think the two CPUs share the same L2 cache, 512KB + 512KB = 1024KB. Correct me if I'm wrong. Does it mean each CPU cannot see the L2 cache of the other, or was it CPU-Z that read it as being separate?

MikeMurphy
03-12-2005, 10:39 AM
Wow, I thought the dual core CPUs would require a unique mobo?!

nebuchanezzar
03-12-2005, 10:40 AM
I don't know what "ciascuna" means, but the rest means "quantity equal to 1MB". More info and testing will answer those questions, but my guess is a shared 1MB, not 1MB each. Hopefully I'm wrong.

Jrocket
03-12-2005, 10:49 AM
I think the two CPUs share the same L2 cache

It has always been my impression that each core has its own bank of cache that is specifically allocated to it. It is very possible that there will be an FX dual core with 1MB per core while the non-FX dual cores will have 512KB per core.

Jrocket
03-12-2005, 10:55 AM
Wow, I thought the dual core CPUs would require a unique mobo?!

You just made me realize that they did say S939, didn't they!?!?
Yeah, it was originally stated that dual core processors would require more pins to operate. However, that statement was made regarding Intel. Intel has to redesign their dual cores for a different socket because they are having to completely redesign the processor; the Pentium 4 was not designed to be used with two cores. AMD, on the other hand, created the Athlon 64 with two cores in mind. The Athlon 64 was designed from the beginning to support dual cores, and the reason for the s754 to s939 socket change was most likely to accommodate the upcoming dual core processors.

IvanAndreevich
03-12-2005, 11:08 AM
This is K8, not Intel's shared-L2 crap. Each core should have its own L2, which should be accessible by the other core through the HT bus.

agenda2005
03-12-2005, 11:08 AM
It has always been my impression that each core has its own bank of cache that is specifically allocated to it. It is very possible that there will be an FX dual core with 1MB per core while the non-FX dual cores will have 512KB per core.

You are right! However, that type of design requires a sophisticated memory controller which needs to schedule data transfers between each CPU's L2 and the correct bank, adding a delay of one or more cycles. They would also require a crossbar switch to access data from each other's L2.
A glueless design with a shared L2, on the other hand, makes the memory controller's job easier and needs no crossbar. This is simpler and more efficient, since A64 CPUs have an exclusive L2 cache.

IvanAndreevich
03-12-2005, 11:16 AM
agenda2005
You know this for sure? My memory must have failed me then.

agenda2005
03-12-2005, 11:18 AM
agenda2005
You know this for sure? My memory must have failed me then.

What is that supposed to mean?

RAndomaN
03-12-2005, 11:27 AM
nice! imagine what the FX is gonna be like :D

PetNorth
03-12-2005, 11:35 AM
If you look at the CPU-Z screenshot, it shows 1024KB for CPU #1; that would mean CPU #2 also has 1024KB.

Maybe someone can translate this better than Google:

"On board is integrated two cache L2 distinguished, ciascuna in quantitative par to 1 Mbyte"

pfoot

ciascuna (ciascuno, really) means "each" in English ;)

1MB per core = 2MB total L2 ;)

Jrocket
03-12-2005, 11:35 AM
You are right! However, that type of design requires a sophisticated memory controller which needs to schedule data transfers between each CPU's L2 and the correct bank, adding a delay of one or more cycles. They would also require a crossbar switch to access data from each other's L2.
A glueless design with a shared L2, on the other hand, makes the memory controller's job easier and needs no crossbar. This is simpler and more efficient, since A64 CPUs have an exclusive L2 cache.

I see what you mean, but if they were to share the cache then we would most likely not see as high a speed, because we would only have one inlet to the processor instead of two separate banks pulling from the HyperTransport link. You would run into one core waiting on the other. I think the plan is to take the information from the HyperTransport link and split it into two even groups to be processed by the two separate banks and cores in order to decrease traffic. As far as the crossbar is concerned, you wouldn't necessarily have to use it. Using it would only allow either core to correct a recent mistake, and that is not something that needs to be switched to the other core. Instead you could keep the raw data separately allocated and merge the two afterwards, much like SLI.

Geforce4ti4200
03-12-2005, 02:40 PM
So does the dual core CPU physically have 1MB or 2MB? AMD's roadmap says 2MB.

IvanAndreevich
03-12-2005, 02:55 PM
1MB per core AFAIK

saaya
03-25-2005, 04:04 AM
http://translate.google.com/translate?u=http%3A%2F%2Fwww.hwupgrade.it%2Farticoli%2F1193%2Findex.html&langpair=it%7Cen&hl=en&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools

wow! nice :D

Alec
03-25-2005, 04:18 AM
Whoa! Looks awesome!
Any idea of when these things are due to be released?

Alec

Jamo
03-25-2005, 04:20 AM
not to rain on your parade or anything saaya, but this is old news http://www.xtremesystems.org/forums/showthread.php?t=55889

saaya
03-25-2005, 04:28 AM
ah, sorry i must have missed it :D

will merge the threads :)

saaya
03-25-2005, 06:20 AM
The results are sweet! The 250 runs at 2.4GHz as well, right?
So the dual core 2.4GHz is even faster than the dual Opteron rig? :eek:

Now it's important to know whether the dual Opteron had NUMA or not, though, but I guess not.

saaya
03-25-2005, 07:36 AM
Hi Sascha

for the Dual Opteron I've used an Abit SU-2S motherboard; NUMA was disabled, with ECC Registered memory (ECC off during benchmarking)

No NUMA then... hmmm, so multiple CPUs even without NUMA are still faster than a dual core CPU... but not by much.

So from this we can conclude that the A64 architecture scales very nicely with more bandwidth, which all tweakers know already :D

perkam
03-25-2005, 08:35 AM
Where are the benchmarks though? :(

Perkam

saaya
03-25-2005, 09:02 AM
Can't you see them?

He only ran Cinebench from what I can see...

prc8
03-25-2005, 05:13 PM
For gaming I don't think we will see a big improvement.

We need to wait and see.

saaya
03-25-2005, 07:50 PM
Well, Unreal 3 and the game engine that Splinter Cell 3 is based on claim to have multithreading support, but we will see how much of an impact this will have...

rather minimal, I think, as well...

AMD told me that this is exactly why they will launch the dual core chips mainly for the server market, and the desktop dual core chips will only be an addition to the current lineup and will not replace the single core CPUs, like Intel plans to do... Intel plans to replace all their higher-end single core CPUs with dual core CPUs... we will see if it works out, as their dual core CPUs come clocked quite a bit lower...

But Intel is pushing 65nm really hard, and we might actually see their new 65nm dual core chip codenamed Presler in Q3 of this year! So dual core 90nm AMD CPUs might have to compete with 65nm dual core Presler CPUs... which will definitely be a much more interesting match than 90nm dual core A64s versus 90nm dual core Prescotts :D

Bloody_Sorcerer
03-25-2005, 08:24 PM
You just made me realize that they did say S939, didn't they!?!?
Yeah, it was originally stated that dual core processors would require more pins to operate. However, that statement was made regarding Intel. Intel has to redesign their dual cores for a different socket because they are having to completely redesign the processor; the Pentium 4 was not designed to be used with two cores. AMD, on the other hand, created the Athlon 64 with two cores in mind. The Athlon 64 was designed from the beginning to support dual cores, and the reason for the s754 to s939 socket change was most likely to accommodate the upcoming dual core processors.
No, the reason for the move from s754 to s939 was "dual channel"; nothing to do with dual core. Dual core just so happens to conveniently work on the same socket because each processor (single core, 2 cores, whatever) gets 3 HyperTransport links to use as it wishes.

VVJ
03-26-2005, 05:02 AM
For all: I have a datasheet on Dual Core Athlon 64 processor and its official name is "AMD Athlon 64 X2 Dual Core Processor".

saaya
03-26-2005, 07:25 AM
X2? or x² ? :D

dX.
03-26-2005, 02:53 PM
For all: I have a datasheet on Dual Core Athlon 64 processor and its official name is "AMD Athlon 64 X2 Dual Core Processor".
At least they are using a name that actually relates to it being a dual core processor (the X2, that is). No stupid "AMD Athlon 64 Hyper Extreme++" names. :)

EMC2
03-26-2005, 04:29 PM
FYI... architecture:

http://img8.exs.cx/img8/6210/dualcore7nc.png (http://www.imageshack.us)


Since the AMD64 platform was first discussed publicly in 1999, we have indicated on multiple occasions that AMD64 was designed from the ground up to be optimized for multiple cores.

End users can upgrade their existing systems that are compatible with 90nm single core processors to dual core processors

Direct Connect Architecture
– Addresses and helps reduce the real challenges and bottlenecks of system architecture because everything is directly connected to the CPU
– Directly connects the two processor cores on to a single die to even further reduce latencies between processors

The 2 CPU cores share the same memory and HyperTransport™ technology resources found in single core AMD processors
– Integrated memory controller & HyperTransport links route out the same as today's implementation

STEvil
03-26-2005, 05:12 PM
Where did you get that image EMC2? It has also been stated that one of the HT links is used to link the cores IIRC..

Xerxes
03-26-2005, 08:14 PM
Aye, STEvil, that is correct, at least for 2+ way systems. Not sure about between the cores on a dual core though, as I haven't seen anything about it that I can remember anyway :)

VVJ
03-27-2005, 12:38 AM
At least they are using a name that actually relates to it being a dual core processor (the X2, that is). No stupid "AMD Athlon 64 Hyper Extreme++" names. :)
dX, yeah! I absolutely agree with you! :toast:

karelke
03-27-2005, 02:00 AM
For all: I have a datasheet on Dual Core Athlon 64 processor and its official name is "AMD Athlon 64 X2 Dual Core Processor".

Can we see it?

EMC2
03-27-2005, 07:25 AM
--- Stevil ---

I started to say "found it under a green and white rabbit in a hat"... look at the color scheme on the pic ;)

Oh, and the HT link is only used for the connection between two physically separate CPUs, whether single or dual core... read the 3rd quote again closely.

Peace bro

VVJ
03-27-2005, 09:09 PM
karelke have you signed a non-disclosure agreement? If yes, please read it again more attentively!

saratoga
03-27-2005, 09:41 PM
Where did you get that image EMC2? It has also been stated that one of the HT links is used to link the cores IIRC..

I explained this a while ago in the dual core thread. Dual core Hammer doesn't use HT for intercore communication, just external. Everything internal is through the SRQ, which can be routed to either the memory controller or the HT links if needed. That's why S939 dual core chips can have only one HT link for 2 cores (IIRC, could be wrong on that detail), and why they work in existing motherboards.

STEvil
03-28-2005, 12:16 AM
read the 3rd quote again closely
Sounds an awful lot like one of the HT links is being used.


I explained this a while ago in the dual core thread. Dual core Hammer doesn't use HT for intercore communication, just external. Everything internal is through the SRQ, which can be routed to either the memory controller or the HT links if needed. That's why S939 dual core chips can have only one HT link for 2 cores (IIRC, could be wrong on that detail), and why they work in existing motherboards.
Two processors combined have 6 HT links total; using one each to connect to each other leaves 4. Those remaining links are used for system (one each?) and memory access (one each?). Now I am not exactly sure on the details, but knowing that each physically separate CPU (i.e. different sockets) must have a system and memory link, this is how it logically plays out.

Moving two cores onto one die doesn't change the rule that the cores need to talk to each other, and the HT bus is there to do it. What the 4 extra HT links are used for (or are they pooled so one CPU can access the system while another accesses more RAM?) has not been told to us as far as I know, and since the cores can talk to each other, pooling the HT links to let one CPU use double the HT links at any given time (where appropriate) does seem the logical way to do things..

Although doing things logically or efficiently isn't in the vocabulary of some.

EMC2
03-28-2005, 06:43 AM
--- Stevil ---

The key I was trying to draw your attention to in the 3rd quote was, "Directly connects the two processor cores on to a single die to even further reduce latencies between processors"... latency is reduced because they are no longer communicating with each other thru HTT links ;) In the dual-core the HTT links are not used to connect the two cores. The HTT links & the memory controller/interface are shared by the two cores via the crossbar switch, and are not part of the cores "proper".

Another way to look at it is the mem controller and the HTT links are an embedded NB, with the system request interface and crossbar switch the connection between the core(s) and this "internal NB". Also note that thru the use of crossbar technology in the interface, it will be possible for one core to access memory while the other is communicating via an HTT link to the rest of the system or another physical CPU.

Regarding crossbar switches... they allow the direct connection between all devices connected and multiple connections to be concurrent. Example: Core1<->Mem, Core2<->HTT0, HTT1<->HTT2, all at once... or any other combination.
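To picture the concurrency, here's a quick toy model of that grant logic (purely illustrative, not AMD's actual arbitration; the port names are just the ones from the example above):

def crossbar_grant(requests):
    """requests: list of (source, destination) pairs wanting a connection.
    Returns the subset that can be wired up in the same cycle."""
    used_ports = set()
    granted = []
    for src, dst in requests:
        if src not in used_ports and dst not in used_ports:
            used_ports.update((src, dst))   # both ports are now busy
            granted.append((src, dst))
    return granted

# The example from above: three disjoint pairs, all granted concurrently.
print(crossbar_grant([("Core1", "Mem"), ("Core2", "HTT0"), ("HTT1", "HTT2")]))

# Two requesters fighting over the same port get serialized instead.
print(crossbar_grant([("Core1", "Mem"), ("Core2", "Mem")]))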

ooops... time for work, later bro :toast:

saaya
03-28-2005, 12:40 PM
Thx for the infos! :) Very interesting... :D

One thing I didn't get though is what the 3 HT links are supposed to be for. On the image it looks as if the memory controller is independent of the 3 HT links, so there's the memory controller AND 3 HT links?
Then what are the 3 HT links used for? Is this drawing showing an 8xx dual core CPU?
Because 1 CPU only needs one HT link to connect the CPU to the system.
2 CPUs use one to connect to the system and one to connect to the other CPU, right?
And 4 or 8 CPUs use the third HT link to connect to yet another CPU, building up an HT link network between the CPUs, so to say.

Please correct me if I'm wrong :D

So in an 8-way system each CPU uses 2 HT links to connect itself to the other CPUs, and one to connect itself to the system? But does this mean we have 8 CPUs hooked up to 1 chipset in an 8-way system? Or 2 chipsets, or even more?

Does anybody know how it works on the nF4 Pro chipset?
1 master chipset (2200) hooked up to one CPU and then 1 slave chipset (2050) hooked up to the other CPUs? Are the master and slave nF4 chipsets connected via the CPUs' HT links, or are they hooked up with their own HT links somehow? Or PCIe links maybe? :confused:

cadaveca
03-28-2005, 01:20 PM
Only the Opteron level of CPU will feature 3 active HT links. Currently, the A64 has one link active, the FX two, and the Opteron 3. One of the three available is used to interconnect CPUs in the Opteron line; this is needed because they are not in the same package.

The SRI and crossbar manage the communications between the processors in dual cores.

I assume the 3 HT links are needed to be able to handle the bandwidth demand of a dual core... 2 cores means twice the processing power, and double the info. Only 2 links would only be equal to the FX line now, so three are needed to see the real performance boost.

As far as I know, there can be up to a max of 16 or 24 HTT lanes in current standards, but I would have to check the PDF again.

saaya
03-28-2005, 01:40 PM
Well, the direct connection between CPUs is certainly not NEEDED, they could also be connected via the chipset, which would be much, much slower, but yeah, I get your point :D

And are you sure the FX has 2 active links? For what? :confused:

I think you're mixing things up there; current single core Opterons also have 3 HT links, just like the to-be-announced dual core Opterons, at least AFAIK.

cadaveca
03-28-2005, 01:46 PM
That's what I am saying... currently they have these links. This will not change with the dual cores, as they will fit in the same sockets... on the same boards. The way it's implemented, and the similarities to what we have currently, is highly intentional. That's why we have the crossbar controller, as doing it any other way WOULD require a new pinout... like the M2 socket, or whatever it's called, or the 1066-pin AMD socket (dunno 'bout that one, may have just been an early revision of these cores or something).

cadaveca
03-28-2005, 01:55 PM
The A64 has one HTT lane, which is bi-directional. 16-bit, 2-way.

The FX has 2 dedicated lanes, 16-bit, one-way, although from what I understand they can be two-way, but I don't know for sure.

With the FX having dedicated lanes, this increases the overall possible bandwidth of the chip by huge amounts... rather than 22.8GB/s switched bi-directional, it's 45.6GB/s possible. These seem like large numbers, but the chipset has to support a multiplier to get to the upper reaches... 2000MHz on nForce4 is equal to what, 20.0GB/s? Easier for the FX to utilize most of this bandwidth, because of the 2 lanes.

The Opteron uses the third lane to connect to the second CPU, and only differs from the FX in this way. Managing all the communications explains the slight drop in performance from FX to Opteron.

Anyway, even the FX and Opteron, limited by chipset as they are, are not using most of this bandwidth. So it makes sense to just drop in another core to make use of all that extra, and even though the crossbar is there, adding latency, keeping the 2 active lanes means that the small amount of latency should not matter, if you go to two lanes bi-directional, OR dedicated.

saaya
03-28-2005, 03:24 PM
The A64 has one HTT lane, which is bi-directional. 16-bit, 2-way.

The FX has 2 dedicated lanes, 16-bit, one-way, although from what I understand they can be two-way, but I don't know for sure.

With the FX having dedicated lanes, this increases the overall possible bandwidth of the chip by huge amounts... rather than 22.8GB/s switched bi-directional, it's 45.6GB/s possible. These seem like large numbers, but the chipset has to support a multiplier to get to the upper reaches... 2000MHz on nForce4 is equal to what, 20.0GB/s? Easier for the FX to utilize most of this bandwidth, because of the 2 lanes.

The Opteron uses the third lane to connect to the second CPU, and only differs from the FX in this way. Managing all the communications explains the slight drop in performance from FX to Opteron.

Anyway, even the FX and Opteron, limited by chipset as they are, are not using most of this bandwidth. So it makes sense to just drop in another core to make use of all that extra, and even though the crossbar is there, adding latency, keeping the 2 active lanes means that the small amount of latency should not matter, if you go to two lanes bi-directional, OR dedicated.

You're saying the A64 has 1 HT link for up- and downstream while the FX has one dedicated HT link for up and one for down? :confused:

And the performance difference between an Opteron and an FX has to do with what? The Opteron having to handle the system communication while the FX doesn't? Sorry, I know that's not what you're trying to explain, but I don't get it, lol :D

Vapor
03-28-2005, 03:40 PM
I thought it was: all A64s, FXs, and Opteron 1xxs have one HT link (bidirectional), 2xxs have two (one for the other CPU), and 8xxs have three (two for the other CPUs).

cadaveca
03-28-2005, 03:46 PM
You're saying the A64 has 1 HT link for up- and downstream while the FX has one dedicated HT link for up and one for down? :confused:

And the performance difference between an Opteron and an FX has to do with what? The Opteron having to handle the system communication while the FX doesn't? Sorry, I know that's not what you're trying to explain, but I don't get it, lol :D


Uh, yeah, you got it already. At least this is the way that I understand it. I could be wrong, but it seems to make sense.


The biggest thing is the realization that HTT support is highly chipset dependent, and for these CPUs to work in current mobos, they have to kind of fit the same HTT scheme.

The crossbar is only there because of this... they have to fit current standards, or there is not going to be much of a market if everyone needs a new mobo for these CPUs... mind you, I guess Dell et al. are enough of a demand. But regardless, the crossbar only makes sense if they are sticking to the same HTT layout as is currently used.

the HTT pdf is located here:

http://www.hypertransport.org/docs/spec/HTC20031217-0036-0005.pdf

Anyway, the attached images very roughly show what I am saying, and the documents at both AMD and the HyperTransport Consortium go into greater detail, but you already got the gist of it.


just remember that the opteron has only 1 additional pin...and what is that pin for?

saaya
03-28-2005, 03:59 PM
AFAIK the extra pin of the Opteron is just to make A64s and Opterons not platform compatible.
939 = desktop, unregistered memory
940 = server, registered memory

With registered memory they can run more memory at a higher speed, but the latest memory controllers run fine with a lot of RAM at high speeds; that's why you can run Opterons with unregistered memory (if the BIOS/board supports it).

And AFAIK the only difference between the FX and the Opteron is that the Opteron (usually) uses registered memory, which means a performance drop in some situations.

And from my understanding it's like Vapor described:
A64/FX/1xx = 1
2xx = 2
4xx/8xx = 3

What "system communication" do you mean the Opteron has to handle?

cadaveca
03-28-2005, 04:07 PM
So if the pin means nothing CPU-wise, then what makes them different? Not much.

Pretty sure the FX has 2 as well, but then again, I could be wrong. Look at the masking... there are 2 separated HTT channels (the masking is of the FX Clawhammer).

System communication... on a multi-threaded stream the processing power of a dual Opteron is not directly equal to the performance of two Opterons, nor is the performance the same between two CPUs at the same speed. I guess this could be due to the ECC mem, but I was always under the impression that it was because of the HTT links.

The HyperTransport Consortium also treats the FX as a different CPU from the standard A64, and the only thing that makes sense for them to do so is because of differences in HTT use.

Like I said, I could be wrong, but this is how I understand it.

seems you guys are right...


Additionally, the AMD Opteron processor features three HyperTransport links, compared to the one HyperTransport link of the AMD Athlon FX processor. They are also tested to different electrical specifications.

I assumed from the masking that there were two channels... looks that way anyway. I guess one is the up and the other the down, or something.

EMC2
03-28-2005, 11:55 PM
1) The crossbar switch exists in current single-core processors, all versions, as well as the coming dual-cores.

2) The 939-socket processor variants only bring one HT link to the outside world. There are extra signals brought out on the memory controller side of things (compared to the 940 socket CPUs).

3) The 940-socket processor variants have pins set aside for 3 HT links to the outside world. Only server versions have them wired up (2 and 3 link versions), "desktop" CPU version doesn't. They have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)

4) In 939-socket and desktop 940 socket variants the single HT link is used to interface to the MB chipset.

5) In server versions, the master CPU uses one link to interface to the MB chipset and the other 2 links are used for interprocessor communication with other physical CPUs. For the "slave" CPUs, all links are used for interprocessor communications.

6) The differences in the two memory controller configurations deal with the way clocks and control strobes are configured, with the 939 socket variants having an A/B set of pins to decrease loading on the signals.

7) All HT links are bi-directional, non-duplexed (16 ins, 16 outs). The links can be configured for narrower configurations via internal CPU registers, 2,4,8,16.

8) Currently CPU-IDs are set aside for 16 CPUs and the HT link routing table for 8 route entries/nodes (i.e. 8 dual-core CPUs).
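One way to picture the link assignment in 4) and 5), just as an illustration (the node names are made up for the sketch, not taken from AMD documentation):

# Made-up topology sketch of which HT link talks to what, per 4) and 5) above.
link_usage = {
    "939_desktop":       {"HT0": "chipset"},
    "940_server_master": {"HT0": "chipset", "HT1": "CPU1", "HT2": "CPU2"},
    "940_server_slave":  {"HT0": "CPU0", "HT1": "CPU2", "HT2": "CPU3"},
}

for part, links in link_usage.items():
    wired = ", ".join(f"{link} -> {dest}" for link, dest in links.items())
    print(f"{part}: {wired}")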

As AMD said, they have always been setup and ready for this ;)

I think that covers all the queries and hopefully eliminates the confusion :toast:

cadaveca
03-29-2005, 12:03 AM
3) The 940-socket processor variants have pins set aside for 3 HT links to the outside world. Only server versions have them wired up (2 and 3 link versions), "desktop" CPU version doesn't. They have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)

Ah, thanks. This is what I misunderstood. So the HTT link DOES interfere with the Opterons a bit, but I totally did not get it right. LoL.

8 CPUs seems like a wiring nightmare... how many chipsets for them... 4? Or 2? I saw the IWILL with the add-on board, but not enough to even see what was really going on besides the 4 CPUs on the other board.

we will know when they are out when the bios updates for rev E cpu's come out...
http://www.xbitlabs.com/news/cpu/display/20050323135117.html

saaya
03-29-2005, 07:11 AM
cadaveca, the FX is treated differently, but it's pretty much just an unlocked A64... it's only marketing, there's nothing really different in this CPU compared to an A64 with 1MB L2 cache, other than the unlocked multis :D

EMC2, thx a lot :toast: I didn't know the current CPUs already have the x-switch inside; all 90nm CPUs, or already the 130nm CPUs?
If they added it for the 90nm CPUs, that finally explains the slight performance boost over 130nm and why AMD didn't comment on the boost :D

Can you explain 3) in more detail again? :D

They (940 CPUs?) have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)
I don't really understand what you mean by fewer signals.


5) In server versions, the master CPU uses one link to interface to the MB chipset and the other 2 links are used for interprocessor communication with other physical CPUs. For the "slave" CPUs, all links are used for interprocessor communications.
Ahhhh, very interesting! Now I get it, thx :toast:



6) The differences in the two memory controller configurations deal with the way clocks and control strobes are configured, with the 939 socket variants having an A/B set of pins to decrease loading on the signals.
Hmmm, is this what you are referring to in 3)?


8) Currently CPU-IDs are set aside for 16 CPUs and the HT link routing table for 8 route entries/nodes (i.e. 8 dual-core CPUs).
Alright! You can read my mind :lol:
You answered a question I hadn't even written down yet :D

Yes, this all cleared up a lot of confusion, at least the confusion I got into over the X2s in MP and the HT link assignment :)

Do you think AMD will add a 4th HT link to CPUs? Or doesn't that make sense? I don't know what the bandwidth figures are atm and how much bandwidth the CPUs actually use in an 8-way system, and how much they will probably use in an 8-way X2 (16-way) system. Do you think a 4th HT link would make more sense, or adding yet another one or two slave CPUs to each master CPU?

This is all very interesting; once you get how it's set up, you see how simple and well thought out the MP array of A64s is. This is really a nice step into the future; now all AMD has to do is tweak things some more and move more and more of this MP array technology onto one die... the A64 architecture looks really very future-proof.

Intel's move from single to multi and many cores is confusing me instead.
Smithfield is 2 cores on one die which actually has a crossbar switch and is seen as one CPU by the chipset (according to Intel, but I don't know if that's true or marketing). Then they make a step backwards again by going with 2 single dies, each with only one core, with Presler, with the chipset handling it like SMP-on-a-package, and then they will HAVE to make 2 steps forward again to more cores on one die, with a crossbar switch again, sooner or later.

Because the single-core dies they stick on one package get smaller and smaller yet keep the same amount of pins or even need more... and the more cores they add, the more communication pins they need to add overall, if they don't use a crossbar switch... and the chipset has to handle the different cores, and the mobo would need extra traces as well... atm it looks very confusing to me what Intel is doing... and how they plan to smoothly transition their line from single to dual and then multi cores...

STEvil
03-29-2005, 09:07 AM
FX is Opteron on 939, Saaya.

saaya
03-29-2005, 09:10 AM
FX is Opteron on 939, Saaya.
I thought all Opterons have 2 HT links? The FX only has one, making it an A64 with unlocked multis rather than an Opteron on 939.

And Opterons run with registered memory... I think the FX is more an A64 with unlocked multis than an Opteron on 939... but I guess you can argue about it, since all 3 are based on 100% the same architecture and physical cores :lol:

cadaveca
03-29-2005, 09:40 AM
Additionally, the AMD Opteron processor features three HyperTransport links, compared to the one HyperTransport link of the AMD Athlon FX processor. They are also tested to different electrical specifications.

i think this says it.

STEvil
03-29-2005, 10:37 AM
FX uses the same core as the Opteron (Clawhammer), and the Opteron does not require registered memory to work, provided the board's BIOS supports unbuffered (Abit's dual Opteron nF chipset board for example.. WNIIS+ or something is the model).

saaya
03-29-2005, 04:16 PM
FX uses the same core as the Opteron (Clawhammer), and Opteron does not require registered memory to work, provided the boards bios supports unbuffered (Abit's dual opteron nF chipset board for example.. WNIIS+ or something is the model).

Clawhammer = desktop
Sledgehammer = server

But if you say the FX is an Opteron, an A64 is just as much an Opteron as well :D
As I said, it's a topic that can only be argued about :D

In the end, does it matter at all whether the FX is more an Opteron than an A64? :D

STEvil
03-29-2005, 04:52 PM
Again this comes down to cores.

FX 51, 53, 55, Opteron, Newcastle: Clawhammer (called Sledgehammer in 940 package)
Winchester 512k: Winchester.

I'm not actually sure if all Newcastles use the Clawhammer core or if they are also built on their own (my guess is they are built on their own and failed Clawhammers become Newcastles, with the Newcastle core filling in when there aren't enough failed Claws)..

New San Diego and Venice cores only add to the mix..


Someone needs to upload pics of each core. I thought you had a thread up with pics of each (except Winchester of course) a year or so ago, saaya, but I could have been mistaken...

EMC2
03-29-2005, 11:15 PM
Saaya, you're welcome :)

All have had the crossbar in place, here's from AMD's 2001 briefing...

http://img8.exs.cx/img8/831/origarchitecture20017pc.th.png (http://img8.exs.cx/my.php?loc=img8&image=origarchitecture20017pc.png)

and here is from 2001, showing they already had the links in for dual-core ;)

http://img8.exs.cx/img8/4409/orignb20010ni.th.png (http://img8.exs.cx/my.php?loc=img8&image=orignb20010ni.png)

All of 3) in previous post was about 940 variants...


hmmm is this what you are referring to in 3) ?
Yes, 6) was an explanation of the difs in the mem signals between 939s and 940s.


alright! you can read my mind:lol:

Just my psychotic side coming out :p:

Regarding the Qs... rather than get into what they might do.... let's approach it this way :)

AMD has left reserved bits and unused register space in place in such a way that it would allow them to add a 4th link... as well as 16 more CPU IDs. That said, they would have to go to a different package to support another link (would need more pins).

The bandwidth available (6.4GB/s each direction, per link) with the current links is more than enough for interprocessor communication... in fact it is high enough to support different physical CPUs accessing each other's memory space at almost the peak burst bandwidth of their own memory interface... and more than the real average bandwidth... with some added latency of course ;) (think of unbuffered Sandra scores)

Whether or not that is enough for a given application, really depends on the app though. You don't really need a fourth link unless you start getting into hypercube type topologies, and a more effective way to up the bandwidth would be either higher speeds on the links or wider links (again, bits reserved for both cases).
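To put rough numbers on the width/speed point, a quick back-of-the-envelope sketch (the link clocks and widths here are only illustrative, not a statement of what any particular platform runs at):

def ht_link_gbs(width_bits, clock_mhz, ddr=True):
    """Per-direction bandwidth of one HyperTransport link in GB/s."""
    transfers_per_s = clock_mhz * 1e6 * (2 if ddr else 1)  # double data rate
    return (width_bits / 8) * transfers_per_s / 1e9

for width, clock in [(16, 800), (16, 1000), (32, 800)]:
    print(f"{width}-bit link @ {clock} MHz DDR: "
          f"{ht_link_gbs(width, clock):.1f} GB/s per direction")

Either knob, width or clock, scales the per-direction figure linearly, which is the point about wider or faster links above.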

Regarding more than 2 cores per physical CPU... they haven't yet shown the internal "hooks" for that, and you start running into challenges in regards to power & thermal requirements, as well as die size (cost, yield, etc.). If those challenges were surmounted... That's one great thing about crossbar technology, you can always add more ports efficiently.

I would completely agree that AMD did a superb job in planning for the future and putting together a well thought out architecture :D The only real limits are ones dealing with physics.

Regarding the "I" word comments... chipsets don't care how many monkeys are on the other end of the line, they (chipsets) only care about how many lines are wired to them (i.e. if there is a single port between the chipset and something else or multiple ports). If a single port, all the chipset sees is a single requester/destination and the routing details are handled by the monkeys on the other end (processors).

I almost get the impression that they (Intel) were unprepared for the gorilla AMD set loose with the A64 architecture... and as of yet don't appear to have a well thought out plan. Time will tell.

Speaking of time... all this talk of monkeys has made me hungry, think I'll go grab a banana :lol:

cadaveca
03-29-2005, 11:43 PM
Thanks for the pics... helps unmuddy the waters. But then you are saying my thoughts were right, although I had the HyperTransport thing screwed up.

That's what I am saying... currently they have these links. This will not change with the dual cores, as they will fit in the same sockets... on the same boards. The way it's implemented, and the similarities to what we have currently, is highly intentional. That's why we have the crossbar controller, as doing it any other way WOULD require a new pinout... like the M2 socket, or whatever it's called, or the 1066-pin AMD socket (dunno 'bout that one, may have just been an early revision of these cores or something).

What I was meaning was that a large part of the reason the crossbar is there in these CPUs is to allow for compatibility with future revisions, and dual/multi core. The crossbar, onboard memory controller, and HTT just form a framework for connection to the motherboard/chipset, and aren't really about the function of the CPU, if you know what I mean.


Regarding more than 2 cores per physical CPU... they haven't yet shown the internal "hooks" for that, and you start running into challenges in regards to power & thermal requirements, as well as die size (cost, yield, etc.). If those challenges were surmounted... That's one great thing about crossbar technology, you can always add more ports efficiently.


It should be easy to implement more cores, shouldn't it? Power and thermal requirements be damned... it's the cache that makes the difference, no?

From what I have read, AMD have had amazing foresight in their business plan. I mean really, there is a reason there are 939 pins in their packages, and only 775 in Intel's. That alone is enough to give AMD an edge. But you know way more than I do... I just read stuff online... But can you really see AMD using anything other than a crossbar for interconnection? Isn't this almost a standard for multiprocessing? I don't understand completely, but I do know that you have to get all that number crunching together somehow, and something has to link to the memory controller and HyperTransport... but then there's HyperTransport... do they not NEED a crossbar in order to use HyperTransport?

saaya
03-30-2005, 08:18 AM
Again this comes down to cores.

FX 51, 53, 55, Opteron, Newcastle: Clawhammer (called Sledgehammer in 940 package)
Winchester 512k: Winchester.

I'm not actually sure if all newcastles use the clawhammer core or if they are also built on their own (my guess is they are built on their own and failed clawhammers become newcastle, with the newcastle core filling in when there arent enough failed claws)..

New San Diego and Venice cores only add to the mix..


Someone needs to upload pics of each core, I thought you had a thread up with pics from each (except winchester of course) a year or so ago saaya but I could have been mistaken...

Yepp, and I had a pic of a Winchester core even back then already :D
Newcastle is a different core with only 512KB of L2 cache physically; the failed Clawhammer chips with only 512KB of L2 cache enabled are actually still labeled Clawhammer in CPU-Z and not Newcastle. AFAIK almost all 512KB L2 cache A64s are Newcastle cores, and there are only a few with Clawhammer cores with half the cache disabled. I haven't seen a Clawhammer with half the cache disabled for months, tbh...

And there are more cores; there's also an A64 core which only has 256KB of L2 cache physically, made in 90nm, which is used for the mobile Semprons and also for some desktop Semprons.
And some of them even have half their cache disabled so they only have 128KB of L2 cache. They still perform pretty well for the original-Celeron-like sized L2 cache, and they are super cheap (70€ here in Germany and prices are still going down).

EMC2, thx for all those infos :toast:
So AMD left themselves quite a lot of options to add to the current technology as well... really nice and clean design... good engineering :up:

And about Intel, well, they told me that Smithfield is hooked up to the system just like a single core CPU, while with Presler each CPU will be hooked up to the platform like the current single core CPUs.
But this means (or that's what I understand from it) that Smithfield MUST have some sort of an x-switch like AMD, but for some reason they are making a step backwards by removing the switch and giving each CPU its own interface for Presler.
The added bandwidth is nice, but wouldn't it make more sense to keep the switch and make the bus faster or wider? Because they will HAVE to go for a switch anyway sooner or later if they keep adding more cores.

Or is it the same to have two independent small buses or one shared bus which is two times as wide?
I thought that one bus of double the width would be more efficient in case both CPUs request the same data, for example...

And about future cores, AMD told me the next move will be 4 cores on one die, I guess in 65nm. They said going for one die is better because the CPUs can be hooked up internally and you save pins that way, which is becoming a problem with shrinking manufacturing sizes as the packages have to get finer and finer as well, which costs more money.

cadaveca, what do you mean by whether they need a crossbar to use the HyperTransport bus? All the communication from the CPUs goes through the crossbar, and the crossbar is hooked up to the HT links and the memory controller. AFAIK the two CPUs can even access each other's cache through the crossbar, but I'm not sure about that.

STEvil
03-30-2005, 09:02 AM
adding sempron in makes even more mess ;)

But anyway, some 3500+s and all 4000+s are Clawhammer.

EMC2
03-30-2005, 08:10 PM
EMC2, thx for all those infos :toast:
/me sends Saaya a bill :hehe:


And about Intel, well, they told me that Smithfield is hooked up to the system just like a single core CPU, while with Presler each CPU will be hooked up to the platform like the current single core CPUs. But this means (or that's what I understand from it) that Smithfield MUST have some sort of an x-switch like AMD,
They didn't necessarily have to be using a crossbar, could have simply had an arbiter.


but for some reason they are making a step backwards by removing the switch and giving each CPU its own interface for Presler.
Can we vote on the reason? :hehe: From the info I've seen, there is still only one interface, even though it is two separate dies (i.e. not 2 independent busses... still in the same physical package, same pin count AFAIK).


The added bandwidth is nice, but wouldn't it make more sense to keep the switch and make the bus faster or wider? Because they will HAVE to go for a switch anyway sooner or later if they keep adding more cores.
What added bandwidth? Regarding making more sense... don't forget that Intel still has the mem controller outboard, so everything funnels through the one interface ;) In general tho, no; considering the appetite their CPUs have for bandwidth, none of their roadmap makes a ton of sense from here.


Or is it the same to have two independent small buses or one shared bus which is two times as wide? I thought that one bus of double the width would be more efficient in case both CPUs request the same data, for example...
Coherency could be handled on die with communication between the two cores... as long as the mem controller is outboard of the CPU, two separate interfaces for the cores would actually work better. Basically think of the mem controller, crossbar, and I/O interface that reside onboard for the A64 being all together in the outboard NB for the Intel cores. By having separate busses, one could be accessing memory while the other could be accessing Gfx, HDs, etc.; plus efficient combining and ordering of accesses could be handled in the outboard NB more effectively, since it is where the mem controller and I/O interfaces are.


And about future cores, AMD told me the next move will be 4 cores on one die, I guess in 65nm. They said going for one die is better because the CPUs can be hooked up internally and you save pins that way, which is becoming a problem with shrinking manufacturing sizes as the packages have to get finer and finer as well, which costs more money.
Their statement is true... it would be better. Just the challenges that I outlined in the previous post.

Peace :toast:

trance565
03-31-2005, 07:49 PM
OK, just wondering, what would the overclocking be like? I've never used or seen a 2-CPU system personally. Would you have to OC the cores separately, or would they OC at the same time, just having to mess with one setting?

cadaveca
03-31-2005, 08:04 PM
cadaveca, what do you mean by whether they need a crossbar to use the HyperTransport bus? All the communication from the CPUs goes through the crossbar, and the crossbar is hooked up to the HT links and the memory controller. AFAIK the two CPUs can even access each other's cache through the crossbar, but I'm not sure about that.

I am saying that in order to bring dual cores to market in a viable way (meaning in a way they can make money), the actual make-up of the processor from the crossbar to the chipset (crossbar, mem controller, HyperTransport) has to remain the same.
You were asking about how they were going to connect the CPUs together... to me, the crossbar is how it will stay, until they change the socket (and probably long after that as well). Changing the socket right at this moment, when new chipsets have just come out, does not make good business sense. Also, knowing that multi-cores were part of AMD's plan from the start, it kinda just makes sense. Everyone has been heralding the death of 754 for a while, but I have been one of the few people advocating its rise from the supposed "bargain bin" back to mainstream right alongside 939 (because of HyperTransport, the crossbar, the memory controller, and what it meant for their future plans).


Their use of the crossbar allows them to change the actual functions of the CPU itself, but provides a framework that future processors can use to connect to the mainboard as well, and just might bring about some standardization that will lead to lower development costs and fewer compatibility issues. To me, this is the whole purpose of the HyperTransport Consortium: standardization of connectivity in the computing platform. But hey, I just predict the trends, and not the technicalities.


EMC2 says they could have used an arbiter, and really, i don't even know what that is, but i do remember reading AMD saying something about it not being able to conform to the uses they intended for the A64, and something about latency, but with how i had HT confused in the beginning, maybe i'm off my rocker a bit.

EMC2
04-01-2005, 05:20 AM
EMC2 says they could have used an arbiter, and really, i don't even know what that is, but i do remember reading AMD saying something about it not being able to conform to the uses they intended for the A64, and something about latency, but with how i had HT confused in the beginning, maybe i'm off my rocker a bit.

FYI... the arbiter comment was in regards to Intel's smithfield... AMD has more sense than that ;) (actually to be clear should have said, "by using an arbiter and a shared bus without a crossbar")

Regarding use of an arbiter - it allows shared use of a resource by deciding who gets control of the resource. Doing it this way, without using a crossbar, multiple devices (for example CPU cores) would request use of the bus from the arbiter; whoever is granted control would then drive the control lines to the resource and gain temporary exclusive control of it while accessing it. At some point the device would give up control, quit driving the control lines, and the next device, as decided by the arbiter, could take control (the same way that DMA resources are usually allocated). Slowdown occurs for several reasons, the main one being the time it takes to switch from one "master" to the other: Device A has to relinquish control and get off the bus, Device B has to be told it has control and start driving the bus, and only then can it start its accesses. The other drawback is that only one device can have use of any of the resources at any one time.
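A quick toy model of that handover cost (the cycle counts are invented, purely illustrative, not any real chip's figures):

HANDOVER_CYCLES = 2  # assumed cost for one master to get off the bus and the next to take over

def shared_bus_cycles(accesses):
    """accesses: list of (master, burst_cycles) in arrival order.
    Returns total bus cycles, charging a penalty on every change of ownership."""
    total, owner = 0, None
    for master, burst in accesses:
        if master != owner:
            total += HANDOVER_CYCLES  # arbiter re-grants the bus
            owner = master
        total += burst
    return total

# Two cores ping-ponging on one shared bus pay the handover cost every time...
print(shared_bus_cycles([("core0", 8), ("core1", 8), ("core0", 8), ("core1", 8)]))
# ...whereas a crossbar lets accesses to different resources overlap instead of queueing.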

Maybe best way to show the differences I'm talking about would be a pic... but no time for that now, work soon.

Peace

saaya
04-01-2005, 07:17 AM
/me sends Saaya a bill :hehe:
Here you go :2cents: :D

They didn't necessarily have to be using a crossbar, could have simply had an arbiter.
I thought it's the same thing with a different name?

Can we vote on the reason? :hehe: From the info I've seen, there is still only one interface, even though it is two separate dies (i.e. not 2 independent busses... still in the same physical package, same pin count AFAIK).
Hmmm, but that would mean each core has its own interface plus an arbiter, as there will also be single core chips based on the same architecture. But then I don't get how they hook up the CPUs...
Hmmm, maybe one arbiter, let's say no. 1, is hooked up to the system and then shares the bus with CPU 1 and the other arbiter, which forwards the info to CPU 2?

cpu2<-arbiter2<-arbiter1->cpu1


What added bandwidth? Regarding making more sense... don't forget that Intel still has the mem controller outboard, so everything funnels through the one interface ;) In general tho, no; considering the appetite their CPUs have for bandwidth, none of their roadmap makes a ton of sense from here.
Hummmz, wth was I talking about? Hmmm, added bandwidth? lol :D My bad :hitself:


Coherency could be handled on die with communication between the two cores... as long as the mem controller is outboard of the CPU, two separate interfaces for the cores would actually work better. Basically think of the mem controller, crossbar, and I/O interface that reside onboard for the A64 being all together in the outboard NB for the Intel cores. By having separate busses, one could be accessing memory while the other could be accessing Gfx, HDs, etc.; plus efficient combining and ordering of accesses could be handled in the outboard NB more effectively, since it is where the mem controller and I/O interfaces are.
Hmmm, yeah, but if it's done right a larger shared bus could be faster, because you save bandwidth for data that both CPUs need at the same time, no?
My bad, it's very unlikely that the two cores will be requesting the same data anyway, since they won't be working on the same thing at the same time :D


OK, just wondering, what would the overclocking be like? I've never used or seen a 2-CPU system personally. Would you have to OC the cores separately, or would they OC at the same time, just having to mess with one setting?
Hey trance565, welcome to XtremeSystems :toast:
On a dual CPU system you have to clock both CPUs at the same speed; I think that's necessary to keep them working efficiently together.

And with a dual core CPU, it would be the same.
But Yonah can disable one CPU core if it's not needed, and I think they are planning to let Yonah's successor even have independent vcore control and independent clock speeds for each CPU.

That's what I thought of once, and I told it to AMD when I met them at CeBIT. I hope they liked the idea and will implement it. Let's say you run an app that is heavily single-threaded; then wouldn't it be cool if the dual core CPU would disable one of its cores? Then it could clock its first core to a much higher speed, still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. And as soon as you start a second demanding thread, core 1 clocks itself down to normal speeds and core 2 starts up again, and you have two CPUs crunching, each on one app :)

This would have to be controlled by the OS though, I think, as implementing it in the CPU would be very hard to configure, I guess... but if it has to rely on the OS it means MS, and we all know what that means... it would take years for them to implement it, and by then probably all apps would be highly multithreaded so it would be useless... :D

too bad...
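To make the idea concrete, a toy sketch of such a clock policy (all the numbers and the decision rule are invented for illustration; this is nothing AMD has announced):

BASE_CLOCK_GHZ = 2.4    # the rated dual-core clock
BOOST_CLOCK_GHZ = 2.8   # assumed headroom freed up by parking the idle core

def pick_clocks(busy_threads):
    """Return per-core clocks (core0, core1) for the number of CPU-hungry
    runnable threads, staying inside one thermal envelope."""
    if busy_threads <= 1:
        return (BOOST_CLOCK_GHZ, 0.0)        # park core 1, boost core 0
    return (BASE_CLOCK_GHZ, BASE_CLOCK_GHZ)  # both cores at the normal clock

print(pick_clocks(1))  # (2.8, 0.0)
print(pick_clocks(2))  # (2.4, 2.4)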

cadaveca, I still don't really get what you're talking about, sorry :/

EMC2, ahhhh so that's an arbiter... you explained it really well, no need for a pic I think :)
The DMA example is perfect; the arbiter is like having two HDDs hooked up on one bus, as slave and master drives, while the crossbar is like having both HDDs hooked up in RAID 0 :)
More or less...

Orthogonal
04-01-2005, 11:45 AM
That's what I thought of once, and I told it to AMD when I met them at CeBIT. I hope they liked the idea and will implement it. Let's say you run an app that is heavily single-threaded; then wouldn't it be cool if the dual core CPU would disable one of its cores? Then it could clock its first core to a much higher speed, still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. And as soon as you start a second demanding thread, core 1 clocks itself down to normal speeds and core 2 starts up again, and you have two CPUs crunching, each on one app

I like that idea and it makes sense. The only reason they are scaling down the core frequencies in the first place is that it's much harder to guarantee that two cores fabbed right next to each other hit a particular speed bin. It is very likely that when they are making one of these dual core processors rated at 2.4GHz, one of them runs just fine at that speed but the other barely manages 2.0GHz, so they would have to speed-bin it down to a 2.0GHz processor. If that is ever the case, it would be hard to implement what you say, as the OS would have to know if one or both of the particular cores are capable of OCing for a single thread. :2cents:

Hombre
04-01-2005, 12:51 PM
Man, these dual-core CPUs look pretty nice. I hope they won't have the temp bug.

trance565
04-01-2005, 04:05 PM
That's what I thought of once, and I told it to AMD when I met them at CeBIT. I hope they liked the idea and will implement it. Let's say you run an app that is heavily single-threaded; then wouldn't it be cool if the dual core CPU would disable one of its cores? Then it could clock its first core to a much higher speed, still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. And as soon as you start a second demanding thread, core 1 clocks itself down to normal speeds and core 2 starts up again, and you have two CPUs crunching, each on one app :)

This would have to be controlled by the OS though, I think, as implementing it in the CPU would be very hard to configure, I guess... but if it has to rely on the OS it means MS, and we all know what that means... it would take years for them to implement it, and by then probably all apps would be highly multithreaded so it would be useless... :D



lol, Linux will get that done really quick I'm sure, 6 months before Windows maybe? But yeah, that would be a sweet concept.

EMC2
04-01-2005, 10:28 PM
Here you go :2cents: :D
:p:


Hmmm, yeah, but if it's done right a larger shared bus could be faster, because you save bandwidth for data that both CPUs need at the same time, no?

In a word, no :) Percentage of time same data needed for both CPUs at the same time... very small ... percentage time CPUs waiting on each other to get the heck out of the way... significantly larger ;)


EMC2 ahhhh so thats an arbiter... you explained it really well, no need for a pic i think :) the dma example is pefect, the arbiter is like having two hdds hooked on one bus, as slave and master drive, while the crossbar is like having both hdds hooked up in raid0 :) more or less...

Hmmmm... sorta, kinda :) I had already gen'd up a pic for the arbitrated shared bus before I read this, so you have to suffer thru it anyway :lol:

http://img8.exs.cx/img8/4999/arbitratedbus9bm.th.png (http://img8.exs.cx/my.php?loc=img8&image=arbitratedbus9bm.png)

Peace :toast:

saaya
04-02-2005, 05:18 AM
I like that idea and it makes sense. The only reason they are scaling down the core frequencies in the first place is that it's much harder to guarantee that two cores fabbed right next to each other hit a particular speed bin. It is very likely that when they are making one of these dual core processors rated at 2.4GHz, one of them runs just fine at that speed but the other barely manages 2.0GHz, so they would have to speed-bin it down to a 2.0GHz processor. If that is ever the case, it would be hard to implement what you say, as the OS would have to know if one or both of the particular cores are capable of OCing for a single thread. :2cents:

Well, the cores could be tested and speed-binned independently by AMD, IF they implement an independent clock signal for each core :)


lol, Linux will get that done really quick I'm sure, 6 months before Windows maybe? But yeah, that would be a sweet concept.
Nah, Linux would have support 6 months after the feature has been announced for integration into future CPUs, and it would be ready at the release of the CPU that features it, while Windows would have support ready when the NEXT generation after the CPU that first supported the feature comes out :lol:

And EMC2, sweet pic! Here's some more money: :2cents: :D

EMC2
04-02-2005, 09:38 PM
Well, the cores could be tested and speed-binned independently by AMD, IF they implement an independent clock signal for each core :)
FYI... which wouldn't happen without a socket change ;) Remember that the dual-cores are supposed to be "drop-in" replacements for current CPUs.


EMC2, sweet pic! Here's some more money: :2cents: :D
:lol: Save your :2cents: for a dual-core later this year.

Here's one like the thread topic, Saaya. If I ever make it back over to your side of the pond you can buy me a cup of java.

http://img8.exs.cx/img8/5310/crossbarbus5yd.th.png (http://img8.exs.cx/my.php?loc=img8&image=crossbarbus5yd.png)

Peace :D

bachus_anonym
04-02-2005, 09:45 PM
@ EMC2

You provide very thorough information here...
I just can't help asking what you do for a living, or what your background is? Sorry if I dig too deep.

saaya
04-03-2005, 09:51 AM
If I ever make it back over to your side of the pond you can buy me a cup of java.

not some good fresh german beer? :hrhr: :lol:

EMC2
04-03-2005, 11:48 AM
You provide very thorough information here...
:rotf: hehehe... sorry, long story dealing with the "thorough" part, reminded me of a recent conversation at work... you sure you haven't talked to someone I have worked with or for ?


I just can't help asking what you do for a living, or what your background is? Sorry if I dig too deep.
Understandable Q... no need to apologize, Michal :D It's one of my least favorite topics (me) for conversation... but... will require PMs, YGPM.


--- Saaya ---

Come on bro, no tempting :p:

*momentarily drifts off down memory lane*

Maybe one... a nice cold glass boot full during Octoberfest :hehe:

Excuse my rustiness, it has been a while... I lived in Wehen for 3 years, very fond memories :toast:

saaya
04-03-2005, 12:26 PM
Come on bro, no tempting :p:

*momentarily drifts off down memory lane*

Maybe one... a nice cold glass boot full during Octoberfest :hehe:

Excuse my rustiness, it has been a while... I lived in Wehen for 3 years, very fond memories :toast:

I'm definitely going to this year's Octoberfest; lemme know when and where and you'll get your beer :D

lutjens
04-03-2005, 06:00 PM
What I want to see is how badly the dual core Opteron with NUMA spanks the dual core Xeon... the Opterons should be delivering quite the beating, considering AMD's HyperTransport technology and the four cores vying for bandwidth. ;)

saaya
04-04-2005, 01:20 PM
What I want to see is how badly the dual core Opteron with NUMA spanks the dual core Xeon...the Opterons should be delivering quite the beating considering AMD's Hypertransport technology and the four cores vying for bandwidth.;)

you mean a dual socket dual core board? :D

Orthogonal
04-04-2005, 01:24 PM
you mean a dual socket dual core board? :D

Wouldn't that be something, the board would have to be HUGE though... that's a lot of memory it would have to hold. :D

madgamer
04-04-2005, 04:10 PM
Now that the prelim "preview" results are all over the net regarding Intel's dual core chips, how do you guys think the AMD dual cores will fare in comparison? At least they won't require another socket change. I'm personally betting that AMD might be able to squeeze more MHz out of their chips than Intel initially, due to the AMDs running with less wattage/heat to begin with. Intel was kind of running at the edge of meltdown to begin with, so sticking 2 cores on 1 die is only pushing that even closer IMO.

saaya
04-04-2005, 04:19 PM
Wouldn't that be something, the board would have to be HUGE though... that's a lot of memory it would have to hold. :D

You can use dual core Opterons in any Socket 940 mainboard... so you can use any dual socket 940 board and slap in dual core Opterons. That's what he meant, I think? A dual socket NUMA board with dual core chips in it :)