
Thread: amd dual core preview

  1. #51
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    I thought it was: all A64, FX, and Opteron 1xx chips have one HT link (bidirectional), the 2xxs have two (one for the other CPU), and the 8xxs have three (two for the other CPUs).

  2. #52
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by saaya
    you're saying the a64 has 1 ht link for up- and downstream while the fx has one dedicated ht link for up and one for down?

    and the performance difference between an opteron and an fx has to do with what? the opteron having to handle the system communication while the fx doesn't? sorry, i know that's not what you're trying to explain, but i don't get it, lol

    uh yeah, you got it already. at least this is the way that i understand it. I could be wrong, but it seems to make sense.


    the biggest thing is the realization that HTT support is highly chipset dependent, and for these cpus to work in current mobos, they have to kind of fit the same HTT scheme.

    the crossbar is only there because of this... they have to fit in current standards, or there is not going to be much of a market if everyone needs a new mobo for these cpus... mind you, i guess dell et al. are enough of a demand. but regardless, the crossbar only makes sense if they are sticking to the same HTT layout as is currently used.

    the HTT pdf is located here:

    http://www.hypertransport.org/docs/s...-0036-0005.pdf

    Anyway, the attached images very roughly show what i am saying, and the documents at both AMD and the hypertransport consortium go into greater detail, but you already got the gist of it.


    just remember that the opteron has only 1 additional pin...and what is that pin for?
    Last edited by cadaveca; 02-29-2008 at 03:12 AM.

  3. #53
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    afaik the extra pin of the opteron is just there to make a64s and opterons not platform compatible.
    939= desktop unreg memory
    940= server reg memory

    with reg memory they can run more memory at a higher speed, but the latest memory controllers run fine with a lot of ram at high speeds, that's why you can run opterons with unreg memory (if the bios/board supports it)

    and afaik the only difference between the fx and the opteron is that the opteron (usually) uses reg memory, which means a performance drop in some situations.

    and from my understanding it's like vapor described.
    a64/fx/1xx = 1
    2xx=2
    4xx/8xx=3

    what "system communciation" do you mean does the opteron have to handle?

  4. #54
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    so if the pin means nothing cpu-wise, then what makes them different? not much.

    pretty sure the FX has 2 as well, but then again, i could be wrong. Look at the masking... there are 2 separate HTT channels (masking is of the FX clawhammer)


    system communication... on a multithreaded stream the processing power of a dual opteron is not directly equal to the performance of two opterons, nor is the performance the same between two cpus at the same speed. I guess this could be due to the ECC mem, but i was always under the impression that it was because of the HTT links.

    the hypertransport consortium also treats the FX as a different cpu from the standard A64, and the only thing that makes sense for them to do so is because of differences in HTT use.

    like i said, i could be wrong, but this is how i understand it.

    seems you guys are right...

    Additionally, the AMD Opteron processor features three HyperTransport links, compared to the one HyperTransport link of the AMD Athlon FX processor. They are also tested to different electrical specifications.
    i assumed from the masking that there were two channels... looks that way anyway... i guess one is the up, the other the down, or something.
    Last edited by cadaveca; 03-28-2005 at 04:17 PM.

  5. #55
    Registered User
    Join Date
    Feb 2005
    Location
    The Outer Limits
    Posts
    795
    1) The crossbar switch exists in current single-core processors, all versions, as well as the coming dual-cores.

    2) The 939-socket processor variants only bring one HT link to the outside world. There are extra signals brought out on the memory controller side of things (compared to the 940 socket CPUs).

    3) The 940-socket processor variants have pins set aside for 3 HT links to the outside world. Only server versions have them wired up (2 and 3 link versions), "desktop" CPU version doesn't. They have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)

    4) In 939-socket and desktop 940 socket variants the single HT link is used to interface to the MB chipset.

    5) In server versions, the master CPU uses one link to interface to the MB chipset and the other 2 links are used for interprocessor communication with other physical CPUs. For the "slave" CPUs, all links are used for interprocessor communications.

    6) The differences in the two memory controller configurations deal with the way clocks and control strobes are configured, with the 939 socket variants having an A/B set of pins to decrease loading on the signals.

    7) All HT links are bi-directional, non-duplexed (16 ins, 16 outs). The links can be configured for narrower configurations via internal CPU registers, 2,4,8,16.

    8) Currently CPU-IDs are set aside for 16 CPUs and the HT link routing table for 8 route entries/nodes (i.e. 8 dual-core CPUs).

    As AMD said, they have always been set up and ready for this

    I think that covers all the queries and hopefully eliminates the confusion
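    To make the link layout in 3) and 5) a bit more concrete, here is a rough toy model in Python (my own illustration of the topology described above; the names and structures are invented and have nothing to do with AMD's actual registers or routing-table format):

[code]
# Toy model of the 940-socket layout described above: the master CPU spends
# one HT link on the chipset, the remaining wired links go to other CPUs.
from collections import deque

CHIPSET = "chipset"

def build_2p_topology():
    """Two-socket layout: node 0 is the master CPU."""
    return {
        0: {"link0": CHIPSET, "link1": 1},  # master: one link to the chipset, one to node 1
        1: {"link0": 0},                    # slave: every wired link goes to another CPU
    }

def hops(topology, src, dst):
    """Breadth-first hop count between CPU nodes (the real hardware keeps a
    per-node routing table, 8 entries as mentioned in 8) above)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for peer in topology.get(node, {}).values():
            if peer != CHIPSET and peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return None

print(hops(build_2p_topology(), 1, 0))  # the slave CPU reaches the master in 1 hop
[/code]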

  6. #56
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    3) The 940-socket processor variants have pins set aside for 3 HT links to the outside world. Only server versions have them wired up (2 and 3 link versions), "desktop" CPU version doesn't. They have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)
    AH, thanks. this is what i misunderstood. so the HTT link DOES interfere with the opterons a bit, but i totally did not get it right. LoL.

    8 cpus seems like a wiring nightmare.. how many chipsets for them... 4? or 2? I saw the IWILL with the add-on board, but not enough to even see what was really going on besides the 4 cpus on the other board.

    we will know they are coming when the bios updates for rev E cpus come out...
    http://www.xbitlabs.com/news/cpu/dis...323135117.html

  7. #57
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    cadaveca, the fx is treated differently, but it's pretty much just an unlocked a64... it's only marketing, there's nothing really different in this cpu compared to an a64 with 1mb L2 cache, other than the unlocked multis

    EMC2, thx a lot, didn't know the current cpus already have the x-switch inside. all 90nm cpus, or already the 130nm cpus?
    if they added it for the 90nm cpus that finally explains the slight performance boost over 130nm and why amd didn't comment about the boost

    can you explain 3) in more detail again?
    They (940 cpus?) have fewer signals brought out from the memory controller. (compared to 939 socket CPUs)
    don't really understand what you mean by fewer signals

    5) In server versions, the master CPU uses one link to interface to the MB chipset and the other 2 links are used for interprocessor communication with other physical CPUs. For the "slave" CPUs, all links are used for interprocessor communications.
    ahhhhh very interesting! now i get it, thx


    6) The differences in the two memory controller configurations deal with the way clocks and control strobes are configured, with the 939 socket variants having an A/B set of pins to decrease loading on the signals.
    hmmm is this what you are referring to in 3) ?

    8) Currently CPU-IDs are set aside for 16 CPUs and the HT link routing table for 8 route entries/nodes (i.e. 8 dual-core CPUs).
    alright! you can read my mind
    you answered a question i didn't even write down yet

    yes, this all cleared a lot of confusion, at least the confusion i got in over the X2's in MP and the HT links assignment

    do you think amd will add a 4th HT link to cpus? or doesn't that make sense? i don't know what the bandwidth figures are atm and how much bandwidth the cpus actually use in an 8 way system, and how much they will probably use in an 8 way x2 (16 way) system. do you think a 4th ht link would make more sense, or adding yet another one or two slave cpus to each master cpu?

    this is all very interesting, once you get how it's set up you see how easy and well thought out the MP array of a64s is. this is really a nice step into the future, now all amd has to do is tweak things some more and move more and more of this MP array technology onto one die... the a64 architecture looks really very future proof.

    intel's move from single to multi and many cores is confusing me instead.
    smithfield is 2 cores on one die which actually has a crossbar switch and is seen as one cpu by the chipset (according to intel, but i don't know if that's true or marketing), then they make a step backwards again by going to 2 single dies, each with only one core, with pressler, with the chipset handling it like SMP-on-a-package, and then they will HAVE to make 2 steps forward again to more cores on one die, with a crossbar switch again sooner or later.

    because the single cores they stick on one package get smaller and smaller yet keep the same amount of pins or even need more, which drives packaging costs... and the more cores they add, the more communication pins they need overall if they don't use a crossbar switch... and the chipset has to handle the different cores, and the mobo would need extra traces as well... atm it looks very confusing to me what intel is doing... and how they plan to smoothly transition their line from single to dual and then multi cores...
    Last edited by saaya; 03-29-2005 at 07:19 AM.

  8. #58
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    FX is Opteron on 939, Saaya.

    All along the watchtower the watchmen watch the eternal return.

  9. #59
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by STEvil
    FX is Opteron on 939, Saaya.
    i thought all opterons have 2 ht links? the fx only has one, making it an a64 with unlocked multis rather than an opteron on 939.

    and opterons run with reg memory ... i think the fx is more an a64 with unlocked multis than an opteron on 939... but i guess you can argue about it since all 3 are based on 100% the same architecture and physical cores

  10. #60
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Additionally, the AMD Opteron processor features three HyperTransport links, compared to the one HyperTransport link of the AMD Athlon FX processor. They are also tested to different electrical specifications.
    i think this says it.

  11. #61
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    FX uses the same core as the Opteron (Clawhammer), and Opteron does not require registered memory to work, provided the board's bios supports unbuffered (Abit's dual opteron nF chipset board for example.. WNIIS+ or something is the model).

    All along the watchtower the watchmen watch the eternal return.

  12. #62
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by STEvil
    FX uses the same core as the Opteron (Clawhammer), and Opteron does not require registered memory to work, provided the board's bios supports unbuffered (Abit's dual opteron nF chipset board for example.. WNIIS+ or something is the model).
    Clawhammer = desktop
    Sledgehammer = server

    but if you say the fx is an opteron, then an a64 is just as much an opteron as well
    as i said, it's a topic that can only be argued about

    in the end, does it matter at all if the fx is rather an opteron than an a64?

  13. #63
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Again this comes down to cores.

    FX 51, 53, 55, Opteron, Newcastle: Clawhammer (called Sledgehammer in 940 package)
    Winchester 512k: Winchester.

    I'm not actually sure if all newcastles use the clawhammer core or if they are also built on their own (my guess is they are built on their own and failed clawhammers become newcastle, with the newcastle core filling in when there aren't enough failed claws)..

    New San Diego and Venice cores only add to the mix..


    Someone needs to upload pics of each core, I thought you had a thread up with pics from each (except winchester of course) a year or so ago saaya but I could have been mistaken...

    All along the watchtower the watchmen watch the eternal return.

  14. #64
    Registered User
    Join Date
    Feb 2005
    Location
    The Outer Limits
    Posts
    795
    Saya, you're welcome

    All have had the crossbar in place, here's from AMD's 2001 briefing...



    and here is from 2001, showing they already had the links in for dual-core



    All of 3) in previous post was about 940 variants...

    Quote Originally Posted by Saya
    hmmm is this what you are referring to in 3) ?
    Yes, 6) was an explanation of the difs in the mem signals between 939s and 940s.

    alright! you can read my mind
    Just my psychotic side coming out

    Regarding the Qs... rather than get into what they might do.... let's approach it this way

    AMD has left reserved bits and unused register space in place in such a way that it would allow them to add a 4th link... as well as 16 more CPU IDs. That said, they would have to go to a different package to support another link (would need more pins).

    The bandwidth available (6.4GB/s each direction, per link) with the current links is more than enough for interprocessor communication... in fact it is high enough to support different physical CPUs accessing each other's memory space at almost the peak burst bandwidth of their own memory interface... and more than the real average bandwidth... with some added latency of course (think of unbuffered Sandra scores)

    Whether or not that is enough for a given application, really depends on the app though. You don't really need a fourth link unless you start getting into hypercube type topologies, and a more effective way to up the bandwidth would be either higher speeds on the links or wider links (again, bits reserved for both cases).
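    For a rough feel of how that scales, a quick back-of-the-envelope Python helper (mine, not a quote from the HT spec; plug in whatever width and clock a given platform actually runs):

[code]
# One direction of an HT link moves (width_bits / 8) bytes per transfer,
# and transfers happen on both clock edges (DDR). Illustration only.
def ht_bandwidth_gb_per_s(width_bits, clock_mhz):
    transfers_per_s = clock_mhz * 1e6 * 2          # double data rate
    return (width_bits / 8) * transfers_per_s / 1e9

print(ht_bandwidth_gb_per_s(16, 800))    # 16-bit link at 800 MHz -> 3.2 GB/s each way
print(ht_bandwidth_gb_per_s(16, 1000))   # 16-bit link at 1 GHz   -> 4.0 GB/s each way
print(ht_bandwidth_gb_per_s(32, 1000))   # doubling the width doubles it again
[/code]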

    Regarding more than 2 cores per physical CPU... they haven't yet shown the internal "hooks" for that and you start running into challenges in regards to power & thermal requirements, as well as die size (cost, yield, etc.). If those challenges were surmounted... That's one great thing about crossbar technology, you can always add more ports efficiently.

    I would completely agree that AMD did a superb job in planning for the future and putting together a well thought out architecture The only real limits are ones dealing with physics.

    Regarding the "I" word comments... chipsets don't care how many monkeys are on the other end of the line, they (chipsets) only care about how many lines are wired to them (i.e. if there is a single port between the chipset and something else or multiple ports). If a single port, all the chipset sees is a single requester/destination and the routing details are handled by the monkeys on the other end (processors).

    I almost get the impression that they (Intel) were unprepared for the guerilla AMD set loose with the A64 architecture... and as of yet don't appear to have a well thought out plan. Time will tell.

    Speaking of time... all this talk of monkeys has made me hungry, think I'll go grab a banana
    Last edited by EMC2; 04-01-2005 at 10:31 PM. Reason: Corrected late night brain slog (HT is presently 16-bit on A64)

  15. #65
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    thanks for the pics..helps unmuddy the waters. But then you are saying my thoughts were right, although i had the hypertransport thing screwed up.
    Quote Originally Posted by cadaveca
    that's what i am saying... currently they have these links. this will not change with the dual cores, as they will fit in the same sockets... on the same boards. the way it's implemented, and the similarities to what we have currently, is highly intentional. that's why we have the crossbar controller, as doing it any other way WOULD require a new pinout... like the M2 socket, or whatever it's called, or the 1066-pin amd socket (dunno 'bout that one, may have just been an early revision of these cores or something).
    what i meant was that a large part of the reason the crossbar is there in these cpus is to allow for compatibility with future revisions, and dual/multi core. The crossbar, onboard memory controller, and htt just form a framework for connection to the motherboard/chipset, and aren't really about the function of the cpu, if you know what i mean.

    Quote Originally Posted by EMC2
    Regarding more than 2 cores per physical CPU... they haven't yet shown the internal "hooks" for that and you start running into challenges in regards to power & thermal requirements, as well as die size (cost, yield, etc.). If those challenges were surmounted... That's one great thing about crossbar technology, you can always add more ports efficiently.
    it should be easy to implement more cores, shouldn't it? power and thermal requirements be damned... it's the cache that makes the difference, no?

    From what i have read, AMD have had amazing foresight in their business plan. I mean really, there is a reason there are 939 pins in their packages, and only 775 in intel's. That alone is enough to give AMD an edge. But you know way more than I do... i just read stuff online... But can you really see AMD using anything other than a crossbar for interconnection? Isn't this almost a standard for multiprocessing? I don't understand completely, but i do know that you have to get all that numbercrunching together somehow, and something has to link to the memcontroller and hypertransport... but then there's hypertransport... do they not NEED a crossbar in order to use hypertransport?

  16. #66
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by STEvil
    Again this comes down to cores.

    FX 51, 53, 55, Opteron, Newcastle: Clawhammer (called Sledgehammer in 940 package)
    Winchester 512k: Winchester.

    I'm not actually sure if all newcastles use the clawhammer core or if they are also built on their own (my guess is they are built on their own and failed clawhammers become newcastle, with the newcastle core filling in when there aren't enough failed claws)..

    New San Diego and Venice cores only add to the mix..


    Someone needs to upload pics of each core, I thought you had a thread up with pics from each (except winchester of course) a year or so ago saaya but I could have been mistaken...
    yepp, and i had a pic of a winchester core even back then already
    newcastle is a different core with only 512kb l2 cache physically; the failed clawhammer chips with only 512kb l2 cache enabled are actually still labeled clawhammer in cpu-z and not newcastle. afaik almost all 512kb l2 cache a64s are newcastle cores and there are only a few with clawhammer cores with half the cache disabled. i haven't seen a clawhammer with half the cache disabled for months tbh...

    and there are more cores, there's also an a64 core which only has 256kb l2 cache physically, made in 90nm, which is used for the mobile semprons and also for some desktop semprons.
    and some of them even have half their cache disabled so they only have 128kb l2 cache. they still perform pretty well for the original celeron-like sized l2 cache and they are super cheap (70€ here in germany and prices are still going down)

    EMC2, thx for all those infos
    so amd left themselves quite a lot of options to add to the current technology as well... really nice n clean design... good engineering

    and about intel, well they told me that smithfield is hooked up to the system just like a single core cpu, while with pressler each cpu will be hooked up to the platform like the current single core cpus.
    but this means (or that's what i understand from it) that smithfield MUST have some sort of an x-switch like amd, but for some reason they are making a step backwards by removing the switch and giving each cpu its own interface for pressler.
    the added bandwidth is nice, but wouldn't it make more sense to keep the switch and make the bus faster or wider? because they will HAVE to go for a switch anyway sooner or later if they keep adding more cores.

    or is it the same to have two independent small buses or one shared bus which is two times as wide?
    i thought that one bus of double the width would be more efficient in case both cpus request the same data, for example...

    and about future cores, amd told me the next move will be 4 cores on one die, i guess in 65nm. they said going for one die is better because the cpus can be hooked up internally and you save pins that way, which is becoming a problem with shrinking manufacturing sizes as the packages have to get finer and finer as well, which costs more money.

    cadaveca, what do you mean by whether they need a crossbar to use the hypertransport bus? all the communication from the cpus is hooked up to the crossbar, and the crossbar is hooked up to the cpus. afaik the two cpus can even access each other's cache through the crossbar, but i'm not sure about that.
    Last edited by saaya; 03-30-2005 at 08:22 AM.

  17. #67
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    adding sempron in makes it even more of a mess

    But anyways, some 3500+ and all 4000+ are both clawhammer.

    All along the watchtower the watchmen watch the eternal return.

  18. #68
    Registered User
    Join Date
    Feb 2005
    Location
    The Outer Limits
    Posts
    795
    Quote Originally Posted by saaya
    EMC2, thx for all those infos
    /me sends Saaya a bill

    and about intel, well they told me that smithfield is hooked up to the system just like a single core cpu, while with pressler each cpu will be hooked up to the platform like the current single core cpus. but this means (or that's what i understand from it) that smithfield MUST have some sort of an x-switch like amd,
    They didn't necessarily have to be using a crossbar, could have simply had an arbiter.

    but for some reason they are making a step backwards by removing the switch and giving each cpu its own interface for pressler.
    Can we vote on the reason? From the info I've seen, there is still only one interface, even though it is two separate dies (i.e. not 2 independent busses... still in the same physical package, same pin count AFAIK).

    the added bandwidth is nice, but wouldn't it make more sense to keep the switch and make the bus faster or wider? because they will HAVE to go for a switch anyway sooner or later if they keep adding more cores.
    What added bandwidth? Regarding making more sense... don't forget that Intel still has the mem controller outboard, so everything funnels through the one interface. In general tho, no... considering the appetite their CPUs have for bandwidth, none of their roadmap makes a ton of sense from here.

    or is it the same to have two independent small buses or one shared bus which is two times as wide? i thought that one bus of double the width would be more efficient in case both cpus request the same data, for example...
    Coherency could be handled on die with communication between the two cores... as long as the mem controller is outboard of the CPU, two separate interfaces for the cores would actually work better. Basically, think of the mem controller, crossbar, and I/O interface that reside onboard for the A64 as being all together in the outboard NB for the Intel cores. By having separate busses, one could be accessing memory while the other could be accessing Gfx, HDs, etc., plus efficient combining and ordering of accesses could be handled in the outboard NB more effectively since it is where the mem controller and I/O interfaces are.

    and about future cores, amd told me the next move will be 4 cores on one die, i guess in 65nm. they said going for one die is better because the cpus can be hooked up internally and you save pins that way, which is becoming a problem with shrinking manufacturing sizes as the packages have to get finer and finer as well, which costs more money.
    Their statement is true... it would be better. Just the challenges that I outlined in the previous post.

    Peace

  19. #69
    Xtreme Enthusiast
    Join Date
    Mar 2005
    Location
    houston TX USA
    Posts
    875
    ok, just wondering, what would the overclocking be like? i've never used or seen a 2-cpu system personally, would you have to oc the cores separately? or would they oc at the same time, just having to mess with one setting?

  20. #70
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by saaya
    cadaveca, what do you mean by whether they need a crossbar to use the hypertransport bus? all the communication from the cpus is hooked up to the crossbar, and the crossbar is hooked up to the cpus. afaik the two cpus can even access each other's cache through the crossbar, but i'm not sure about that.
    I am saying that in order to bring dual cores to market in a viable way (meaning in a way they can make money), the actual make-up of the processor from the crossbar to the chipset (crossbar, mem controller, hypertransport) has to remain the same.
    You were asking about how they were going to connect the cpus together... to me, the crossbar is how it will stay, until they change the socket (and probably long after that as well). Changing the socket right at this moment, when new chipsets have come out, does not make good business sense. Also, knowing that multicores were part of AMD's plan from the start, it kinda just makes sense. Everyone has been heralding the death of 754 for a while, but i have been one of few people advocating its rise from the supposed "bargain bin" back to mainstream right alongside 939 (because of hypertransport, the crossbar, the memory controller, and what it meant for their future plans).


    their use of the crossbar allows them to change the actual functions of the cpu itself, but it also provides a framework that future processors can use to connect to the mainboard, and just might bring about some standardization that will lead to lower development costs and fewer compatibility issues. To me, this is the whole purpose of the HyperTransport Consortium. Standardization of connectivity in the computing platform. But hey, i just predict the trends, and not the technicalities.


    EMC2 says they could have used an arbiter, and really, i don't even know what that is, but i do remember reading AMD saying something about it not being able to conform to the uses they intended for the A64, and something about latency, but with how i had HT confused in the beginning, maybe i'm off my rocker a bit.
    Last edited by cadaveca; 03-31-2005 at 08:12 PM.

  21. #71
    Registered User
    Join Date
    Feb 2005
    Location
    The Outer Limits
    Posts
    795
    Quote Originally Posted by cadaveca
    EMC2 says they could have used an arbiter, and really, i don't even know what that is, but i do remember reading AMD saying something about it not being able to conform to the uses they intended for the A64, and something about latency, but with how i had HT confused in the beginning, maybe i'm off my rocker a bit.
    FYI... the arbiter comment was in regards to Intel's smithfield... AMD has more sense than that (actually to be clear should have said, "by using an arbiter and a shared bus without a crossbar")

    Regarding use of an arbiter - it allows shared use of a resource by deciding who gets control of the resource. Doing it this way without using a crossbar, multiple devices (for example CPU cores) would request use of the bus from the arbiter; whoever is granted control would then drive the control lines to the resource and gain temporary exclusive control of it while accessing it. At some point the device would give up control, quit driving the control lines, and the next device, as decided by the arbiter, could take control (same way that DMA resources are usually allocated). Slowdown occurs for several reasons, the main one being the time it takes to switch from one "master" to the other. Device A has to relinquish control, get off the bus, Device B be told it has control, start driving the bus, then start its accesses. The other drawback is that only one device can have use of any of the resources at any one time.
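    A tiny toy simulation of that difference in Python (purely schematic, my own sketch; it ignores the master hand-off time described above and only shows the serialization effect):

[code]
# With one arbitrated bus, every access waits its turn; with a crossbar,
# accesses aimed at *different* resources can complete in the same cycle.
def arbitrated_bus_cycles(requests):
    """Shared bus behind an arbiter: one access per cycle, strictly in turn."""
    return len(requests)

def crossbar_cycles(requests):
    """Crossbar: each resource port serves at most one requester per cycle."""
    cycles, pending = 0, list(requests)
    while pending:
        busy, leftover = set(), []
        for core, resource in pending:
            if resource in busy:
                leftover.append((core, resource))  # that port is taken this cycle
            else:
                busy.add(resource)
        pending, cycles = leftover, cycles + 1
    return cycles

reqs = [(0, "memory"), (1, "io"), (0, "io"), (1, "memory")]
print(arbitrated_bus_cycles(reqs))  # 4 -- everything serialized
print(crossbar_cycles(reqs))        # 2 -- memory and io are used in parallel
[/code]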

    Maybe best way to show the differences I'm talking about would be a pic... but no time for that now, work soon.

    Peace

  22. #72
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by EMC2
    /me sends Saaya a bill
    here you go
    Quote Originally Posted by EMC2
    They didn't necessarily have to be using a crossbar, could have simply had an arbiter.
    i thought it's the same thing with a different name?
    Quote Originally Posted by EMC2
    Can we vote on the reason? From the info I've seen, there is still only one interface, even though it is two separate dies (i.e. not 2 independent busses... still in the same physical package, same pin count AFAIK).
    hmmm but that would mean each core has its own interface plus an arbiter, as there will also be single core chips based on the same architecture. but then i don't get how they hook up the cpus...
    hmmmm maybe one arbiter, let's say no1, is hooked up to the system and then shares the bus with cpu1 and the other arbiter, which forwards the info to cpu2?

    cpu2<-arbiter2<-arbiter1->cpu1

    Quote Originally Posted by EMC2
    What added bandwidth? Regarding making more sense... don't forget that Intel still has the mem controller outboard, so everything funnels through the one interface. In general tho, no... considering the appetite their CPUs have for bandwidth, none of their roadmap makes a ton of sense from here.
    hummmz wth was i talking about? hmmm added bandwidth? lol my bad

    Quote Originally Posted by EMC2
    Coherency could be handled on die with communication between the two cores... as long as the mem controller is outboard of the CPU, two separate interfaces for the cores would actually work better. Basically, think of the mem controller, crossbar, and I/O interface that reside onboard for the A64 as being all together in the outboard NB for the Intel cores. By having separate busses, one could be accessing memory while the other could be accessing Gfx, HDs, etc., plus efficient combining and ordering of accesses could be handled in the outboard NB more effectively since it is where the mem controller and I/O interfaces are.
    hmmm yeah, but if it's done right a larger shared bus could be faster because you save bandwidth for data that both cpus need at the same time, no?
    my bad, it's very unlikely that the two cores will be requesting the same data anyway since they won't be working on the same thing at the same time

    Quote Originally Posted by trance565
    ok, just wondering, what would the overclocking be like? i've never used or seen a 2-cpu system personally, would you have to oc the cores separately? or would they oc at the same time, just having to mess with one setting?
    hey trance565, welcome to XtremeSystems
    on a dual cpu system you have to clock both cpus at the same speed, i think that's necessary to keep them working efficiently together.

    and about a dual core cpu, it would be the same.
    but yonah can disable one cpu core if not needed, and i think they are planning to let yonah's successor even have independent vcore control and independent clockspeeds for each cpu.

    that's what i thought of once, and i told it to amd when i met them at cebit. i hope they liked the idea and will implement it. let's say you run an app that is heavily single-threaded, then wouldn't it be cool if the dual core cpu would disable one of its cores? then it could clock its first core to a much higher speed while still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. and as soon as you start a second demanding thread, core1 clocks itself down to normal speeds and core2 starts up again and you have two cpus crunching, each on one app

    this would have to be controlled by the os though i think, as implementing it into a cpu would be very hard to configure i guess... but if it has to rely on the os it means MS, and we all know what this means... it would take years for them to implement it, and by then probably all apps would be highly multithreaded so it would be useless...

    too bad...
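    For what it's worth, the decision itself would be trivial; the hard part is the hardware/OS plumbing. A Python sketch of the policy described above (all numbers and names are made up):

[code]
# If only one demanding thread is runnable, park core 1 and clock core 0
# higher inside the same thermal envelope; otherwise run both at stock.
NORMAL_MHZ  = 2200
BOOSTED_MHZ = 2600   # hypothetical headroom freed by parking the second core

def pick_core_config(demanding_threads):
    if demanding_threads <= 1:
        return {"core0": BOOSTED_MHZ, "core1": "parked"}
    return {"core0": NORMAL_MHZ, "core1": NORMAL_MHZ}

print(pick_core_config(1))  # single-threaded app -> one fast core
print(pick_core_config(2))  # two heavy threads   -> both cores at stock speed
[/code]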

    cadaveca i still don't really get what you're talking about, sorry :/

    EMC2 ahhhh so that's an arbiter... you explained it really well, no need for a pic i think
    the dma example is perfect, the arbiter is like having two hdds hooked on one bus as slave and master drives, while the crossbar is like having both hdds hooked up in raid0
    more or less...

  23. #73
    Xtreme Member
    Join Date
    Sep 2004
    Posts
    147
    that's what i thought of once, and i told it to amd when i met them at cebit. i hope they liked the idea and will implement it. let's say you run an app that is heavily single-threaded, then wouldn't it be cool if the dual core cpu would disable one of its cores? then it could clock its first core to a much higher speed while still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. and as soon as you start a second demanding thread, core1 clocks itself down to normal speeds and core2 starts up again and you have two cpus crunching, each on one app
    I like that idea and it makes sense. The only reason they are scaling down the core frequencies in the first place is that it's much harder to guarantee that two cores fabbed right next to each other hit a particular speed bin. It is very likely that when they are making one of these dual core processors rated at 2.4 GHz, one of them runs just fine at that speed but the other barely manages 2.0 GHz. So they would have to speed bin it down to a 2.0 GHz processor. If this ever happens to be the case, it would be hard to implement what you say, as the OS would have to know if one or both of the particular cores are capable of OCing for a single thread.

  24. #74
    Xtreme Member
    Join Date
    Dec 2004
    Posts
    357
    Man these dual-core CPU's look pretty nice. I hope they won't have the temp bug.

  25. #75
    Xtreme Enthusiast
    Join Date
    Mar 2005
    Location
    houston TX USA
    Posts
    875


    Quote Originally Posted by saaya


    that's what i thought of once, and i told it to amd when i met them at cebit. i hope they liked the idea and will implement it. let's say you run an app that is heavily single-threaded, then wouldn't it be cool if the dual core cpu would disable one of its cores? then it could clock its first core to a much higher speed while still staying in the thermal envelope and still using the same amount of energy, which would give you a much better result in that single-threaded app than having two slower clocked cores, one pretty much idling and the other one crunching really hard. and as soon as you start a second demanding thread, core1 clocks itself down to normal speeds and core2 starts up again and you have two cpus crunching, each on one app

    this would have to be controlled by the os though i think, as implementing it into a cpu would be very hard to configure i guess... but if it has to rely on the os it means MS, and we all know what this means... it would take years for them to implement it, and by then probably all apps would be highly multithreaded so it would be useless...
    lol, linux will get that done really quick i'm sure lol, 6 months b4 windows mebbe? but ya, that would be a sweet concept

