
Thread: [News] Intel Core i7-8086K Listed, First 5.00 GHz Processor

  1. #1
    Join XS BOINC Team StyM's Avatar
    Join Date
    Mar 2006
    Location
    Tropics
    Posts
    16,513

    [News] Intel Core i7-8086K Listed, First 5.00 GHz Processor

    https://www.techpowerup.com/244673/i...-ghz-processor

    Intel is commemorating 40 years of its 8086 processor, the spiritual ancestor of the x86 machine architecture that rules modern computing, with a special edition socket LGA1151 processor, dubbed Core i7-8086K. The chip appears to feature a nominal clock speed of 4.00 GHz, with a maximum Turbo Boost frequency of 5.00 GHz, making it the first mainstream desktop processor from Intel to hit the 5.00 GHz mark, out of the box.

    The Core i7-8086K is most likely based on a special bin of the 14 nm, 6-core/12-thread "Coffee Lake" silicon, rather than something next-gen or 8-core. The retail SKU bears the part number "BX80684I78086K." The chip will be compatible with Intel 300-series chipset motherboards. Pre-launch listings put its price around $486, which is along expected lines, as it's 70-100 EUR pricier than the i7-8700K. Intel could unveil the Core i7-8086K at the 2018 Computex (specifically on the 8th of June), alongside the first motherboards based on its Z390 Express chipset.

  2. #2
    Xtreme X.I.P. Particle's Avatar
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,218
    If the title is going to make a point of saying 5.0 GHz is a first, it needs to be qualified. "First 5.0 GHz processor from Intel" is accurate. If talking about x86 at large, AMD released a 4.7 GHz base / 5.0 GHz turbo product five years ago in 2013. If talking about computer processors in general, the earliest one I'm personally aware of would be IBM's POWER6 from 2007/2008. The point, though, is just that the title isn't accurate even though the article itself is, since it states "the first mainstream desktop processor from Intel to hit the 5.00 GHz mark, out of the box".
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Rule 3:
    When it comes to computer news, 70% of Internet rumors are outright fabricated, 20% are inaccurate enough to simply be discarded, and about 10% are based in reality. Grains of salt--become familiar with them.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

    Random Tip o' the Whatever
    You just can't win. If your product offers feature A instead of B, people will moan how A is stupid and it didn't offer B. If your product offers B instead of A, they'll likewise complain and rant about how anyone's retarded cousin could figure out A is what the market wants.

  3. #3
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,401
    The first 5 GHz processor should hit 5 GHz at base clock, too.

  4. #4
    Xtreme Monster
    Join Date
    May 2006
    Location
    United Kingdom
    Posts
    2,182
    Quote Originally Posted by Particle View Post
    If the title is going to make a point of saying 5.0 GHz is a first, it needs to be qualified. "First 5.0 GHz processor from Intel" is accurate. If talking about x86 at large, AMD released a 4.7 GHz base / 5.0 GHz turbo product five years ago in 2013. If talking about computer processors in general, the earliest one I'm personally aware of would be IBM's POWER6 from 2007/2008. The point, though, is just that the title isn't accurate even though the article itself is, since it states "the first mainstream desktop processor from Intel to hit the 5.00 GHz mark, out of the box".
    Intel wants free marketing. It hits 5 GHz out of the box on just one core, which is pointless; to be meaningful, all cores should hit 5 GHz and stay at that frequency. As it stands it runs 4.4 GHz on all cores, only 100 MHz more than the i7-8700K, so nothing to see here.
    Last edited by Metroid; 05-31-2018 at 10:46 PM.

  5. #5
    Xtreme Addict
    Join Date
    Dec 2004
    Location
    Flying through Space, with armoire, Armoire of INVINCIBILATAAAAY!
    Posts
    1,963
    Quote Originally Posted by madcho View Post
    5ghz first processor should be base clock too.
    Base clock is nearly meaningless these days, since your CPU will rarely actually run at the base clock. What you really care about is the all-core turbo.
    Sigs are obnoxious.

  6. #6
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,401
    It runs at turbo for ~5 seconds, overheats, then drops to base clock.

  7. #7
    Xtreme Mentor dengyong's Avatar
    Join Date
    Nov 2006
    Location
    A great place again
    Posts
    2,589
    Quote Originally Posted by madcho View Post
    It runs at turbo for ~5 seconds, overheats, then drops to base clock.
    Do you own one of these? How long have you been testing it and under what conditions?

  8. #8
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,401
    No, I don't.
    I'm an electronics engineer, and I understand the path they took to do this.
    They raise the clock above base using temperature/power limits and sensors, and perhaps internal electrical noise or timing margins too.

    They use a built-in microcontroller for this, and they have implemented some fairly advanced profiles.
    The throttle function works on the same principle: when temperature or power gets too high, they drop the frequency below the base clock. This happened a lot on the Pentium 4, and the problem was that Intel was sued for selling processors that couldn't sustain their base frequency.
    They did things differently after that, with a new approach they called "Turbo": the base clock is lower, and the turbo clocks aren't guaranteed...
    Today the real average frequency depends more and more on the process, and even on motherboard design.

    Yes, motherboard design still matters, though less and less. Back in the old Pentium 4 days, the power stage driving that hungry beast wasn't as efficient as today's. It dumped a lot of heat near the CPU, warming the air around the socket and reducing the processor's cooling performance.

    Power stages are much more efficient today. Even so, the CPU socket placement on a motherboard still matters: the airflow path affects cooling performance, and therefore the real average clock.

    On the first generations with Turbo, it lasted around 1 to 5 seconds. It's probably better now, not because the technique itself improved, but because:
    1/ There are many more cores per CPU, so a single core at turbo can last a long time; all cores at turbo (under a CPU burn tool) will never last long, of course. They designed the base clock around all cores running without throttling, plus a small margin. Obviously, if they could have done better with all cores, they would have raised the base clock...
    2/ The process is much improved, which helps a lot thermally. Cores keep getting smaller, surrounded by a lot of dark silicon, and dark silicon makes heat easier to dissipate: you just surround the cores with it.
    3/ Architecture improvements: a lower base clock for the same performance (better IPC) means less dynamic power, which means a cooler CPU. The further the frequency sits below the seemingly insurmountable 5 GHz wall, the "easier" it is to reach large turbo clocks.
    4/ Improvements in on-chip data transmission. New technologies have arrived since the first Pentium; serial links over differential pairs are everywhere now. It's easier to raise clocks when the whole chip isn't running at one clock and each block has its own clock domain. Much easier to synchronize.

    Some parts of this may not be totally exact; it's hard to get real technical data on any of it. You can bet it's guarded like the Holy Grail inside these companies.

    If you find a processor that can sustain all cores at turbo frequency for over 10 minutes (assuming the base clock isn't very low and there isn't a jet fan in front of it), I would be shocked. It's just thermal electronics. The math is the same on a 0-5 W processor (what I work with) as on a 100 W one.

    And of course I'd be interested to read an in-depth test of what this has become today, if one exists.
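    The "turbo is a thermal budget" argument above can be sketched with a toy first-order thermal model. Every constant here is invented for illustration; this is not Intel's actual turbo algorithm or any real chip's parameters:

    ```python
    # Toy first-order thermal (RC) model of turbo duration.
    # All numbers are made up for illustration; not real Intel behavior.

    def simulate_turbo(p_turbo_w=150.0, p_base_w=95.0, t_ambient=30.0,
                       t_limit=90.0, r_thermal=0.4, c_thermal=20.0,
                       dt=0.1, duration=60.0):
        """Return seconds spent at turbo before the temperature limit
        forces a drop back to base power.

        r_thermal: cooler thermal resistance (K/W) - lower = better cooler
        c_thermal: heat capacity of die + heatsink (J/K)
        """
        temp = t_ambient
        turbo_time = 0.0
        at_turbo = True
        t = 0.0
        while t < duration:
            power = p_turbo_w if at_turbo else p_base_w
            # dT/dt = (P - (T - T_ambient) / R) / C
            temp += dt * (power - (temp - t_ambient) / r_thermal) / c_thermal
            if at_turbo:
                if temp >= t_limit:
                    at_turbo = False  # throttle back to base clock
                else:
                    turbo_time += dt
            t += dt
        return turbo_time

    # A better cooler (lower thermal resistance) sustains turbo longer;
    # here the stronger cooler never hits the limit within the run.
    stock = simulate_turbo(r_thermal=0.5)
    water = simulate_turbo(r_thermal=0.3)
    ```

    In this toy model a better cooler keeps the die under the limit longer, which is the whole point being argued: turbo duration is a cooling budget, not a fixed timer.
    
    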

  9. #9
    Xtreme Mentor dengyong's Avatar
    Join Date
    Nov 2006
    Location
    A great place again
    Posts
    2,589
    Quote Originally Posted by madcho View Post
    No, I don't.
    I'm an electronics engineer, and I understand the path they took to do this.
    They raise the clock above base using temperature/power limits and sensors, and perhaps internal electrical noise or timing margins too.

    They use a built-in microcontroller for this, and they have implemented some fairly advanced profiles.
    The throttle function works on the same principle: when temperature or power gets too high, they drop the frequency below the base clock. This happened a lot on the Pentium 4, and the problem was that Intel was sued for selling processors that couldn't sustain their base frequency.
    They did things differently after that, with a new approach they called "Turbo": the base clock is lower, and the turbo clocks aren't guaranteed...
    Today the real average frequency depends more and more on the process, and even on motherboard design.

    Yes, motherboard design still matters, though less and less. Back in the old Pentium 4 days, the power stage driving that hungry beast wasn't as efficient as today's. It dumped a lot of heat near the CPU, warming the air around the socket and reducing the processor's cooling performance.

    Power stages are much more efficient today. Even so, the CPU socket placement on a motherboard still matters: the airflow path affects cooling performance, and therefore the real average clock.

    On the first generations with Turbo, it lasted around 1 to 5 seconds. It's probably better now, not because the technique itself improved, but because:
    1/ There are many more cores per CPU, so a single core at turbo can last a long time; all cores at turbo (under a CPU burn tool) will never last long, of course. They designed the base clock around all cores running without throttling, plus a small margin. Obviously, if they could have done better with all cores, they would have raised the base clock...
    2/ The process is much improved, which helps a lot thermally. Cores keep getting smaller, surrounded by a lot of dark silicon, and dark silicon makes heat easier to dissipate: you just surround the cores with it.
    3/ Architecture improvements: a lower base clock for the same performance (better IPC) means less dynamic power, which means a cooler CPU. The further the frequency sits below the seemingly insurmountable 5 GHz wall, the "easier" it is to reach large turbo clocks.
    4/ Improvements in on-chip data transmission. New technologies have arrived since the first Pentium; serial links over differential pairs are everywhere now. It's easier to raise clocks when the whole chip isn't running at one clock and each block has its own clock domain. Much easier to synchronize.

    Some parts of this may not be totally exact; it's hard to get real technical data on any of it. You can bet it's guarded like the Holy Grail inside these companies.

    If you find a processor that can sustain all cores at turbo frequency for over 10 minutes (assuming the base clock isn't very low and there isn't a jet fan in front of it), I would be shocked. It's just thermal electronics. The math is the same on a 0-5 W processor (what I work with) as on a 100 W one.

    And of course I'd be interested to read an in-depth test of what this has become today, if one exists.
    I have never experienced this problem, but I do use high-end water cooling on all my builds.

  10. #10
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,048
    Well, yeah, the better your cooling the longer/higher the turbo can run.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  11. #11
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by madcho View Post
    On the first generations with Turbo, it lasted around 1 to 5 seconds. It's probably better now, not because the technique itself improved, but because:
    1/ There are many more cores per CPU, so a single core at turbo can last a long time; all cores at turbo (under a CPU burn tool) will never last long, of course. They designed the base clock around all cores running without throttling, plus a small margin. Obviously, if they could have done better with all cores, they would have raised the base clock...
    2/ The process is much improved, which helps a lot thermally. Cores keep getting smaller, surrounded by a lot of dark silicon, and dark silicon makes heat easier to dissipate: you just surround the cores with it.
    3/ Architecture improvements: a lower base clock for the same performance (better IPC) means less dynamic power, which means a cooler CPU. The further the frequency sits below the seemingly insurmountable 5 GHz wall, the "easier" it is to reach large turbo clocks.
    4/ Improvements in on-chip data transmission. New technologies have arrived since the first Pentium; serial links over differential pairs are everywhere now. It's easier to raise clocks when the whole chip isn't running at one clock and each block has its own clock domain. Much easier to synchronize.

    Some parts of this may not be totally exact; it's hard to get real technical data on any of it. You can bet it's guarded like the Holy Grail inside these companies.

    If you find a processor that can sustain all cores at turbo frequency for over 10 minutes (assuming the base clock isn't very low and there isn't a jet fan in front of it), I would be shocked. It's just thermal electronics. The math is the same on a 0-5 W processor (what I work with) as on a 100 W one.

    And of course I'd be interested to read an in-depth test of what this has become today, if one exists.
    I agree with virtually all that you said, but I would like to point out that the newer silicon isn't necessarily always better from a leakage perspective. If I recall correctly, Ivy Bridge had serious issues due to the 22nm process being very leaky at high clocks/temps. FinFET obviously helps a lot, but it's hard to say how turbo speed will be impacted as we go to smaller and smaller processes. I think there's realistically going to be a point in the near future where we need to significantly redo the dielectric materials.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  12. #12
    Xtreme Addict
    Join Date
    Dec 2004
    Location
    Flying through Space, with armoire, Armoire of INVINCIBILATAAAAY!
    Posts
    1,963
    Isn't necessarily better? That's an interesting way of saying "always worse".
    Sigs are obnoxious.

  13. #13
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by iddqd View Post
    Isn't necessarily better? That's an interesting way of saying "always worse".
    It's not always worse. 14nm FinFET process > 22nm process without FinFET. It's not as simple as saying things will always get worse as the gates get smaller. The actual engineering of the gates themselves matters too.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  14. #14
    Xtreme Addict
    Join Date
    Dec 2004
    Location
    Flying through Space, with armoire, Armoire of INVINCIBILATAAAAY!
    Posts
    1,963
    That's why they had to come up with finFET - with planar transistors, the gate leakage and sub-threshold leakage would be unreasonable. But in general, as you make the channel shorter, it is more vulnerable to the aptly-named short-channel problems. All channels suffer from imperfections around the corners (so, close to each depletion region); but with a long-enough channel it doesn't matter; the strong E-field imposed by the gate will keep things under control in the middle of the gate, and the imperfections around the edges don't really matter. As you shorten the channel, two things occur: (1) the E-field is weaker, as there's less gate now, and (2) the imperfection regions are brought closer together.

    As the channel becomes very short, the two imperfection regions start to overlap, which causes disastrous effects, e.g. punch-through: at this point you are leaking 100% of your current from source to drain, and the gate is unable to control the channel at all! The transistor no longer functions as a switch.

    FinFET attempts to deal with this problem by building the channel vertically; where it was previously a flat piece of doped silicon resting between two depletion regions, it now sticks out of the substrate vertically, with the gate wrapping around it. This achieves two things: (1) we once again have more gate surface area - which can put up a stronger E-field to better control the current, and (2) we bring as much of the channel as possible away from the troublesome depletion regions, thereby (partially) mitigating the short channel imperfections (while still having a shorter channel; it's just tall now).

    But you cannot fight physics forever. Right now, we surround our channel with the gate on 3 sides (the bottom is still just sitting on top of the substrate). And yet, short channel effects are still a big concern, and continue to get worse as we shorten the channel. The next obvious step, indeed already attempted by semiconductor researchers, is to surround the channel on all 4 sides! Thus we have the next thing, the Gate-All-Around transistor. We'll likely need it around 4-3nm, otherwise - again, we'll end up with transistors that cannot be turned off once they're turned on.
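    The short-channel leakage trend can be illustrated with a toy subthreshold-current model: threshold-voltage roll-off lowers the effective Vth as the channel shrinks, and off-state current depends exponentially on Vth. All constants below are invented for illustration; real roll-off curves are foundry-proprietary:

    ```python
    import math

    def subthreshold_leakage(l_channel_nm, vth_long=0.45, rolloff_nm=7.0,
                             n=1.5, v_t=0.026, i0=1e-7):
        """Toy model of off-state leakage vs. channel length.

        Vth rolls off exponentially as the channel shortens (a classic
        short-channel effect), and subthreshold current follows
        I_off ~ I0 * exp(-Vth_eff / (n * V_T)).
        Every parameter here is made up for illustration only.
        """
        # Exponential Vth roll-off with channel length
        vth_eff = vth_long - 0.3 * math.exp(-l_channel_nm / rolloff_nm)
        return i0 * math.exp(-vth_eff / (n * v_t))

    # Off-current rises steeply as the channel shrinks.
    i_28nm = subthreshold_leakage(28)
    i_14nm = subthreshold_leakage(14)
    i_7nm = subthreshold_leakage(7)
    ```

    The exponential dependence is why a modest Vth roll-off translates into a large off-current increase, and why geometry tricks like FinFET and gate-all-around (which restore gate control and reduce the roll-off) buy so much.
    
    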
    Sigs are obnoxious.

  15. #15
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,401
    Quote Originally Posted by iddqd View Post
    That's why they had to come up with finFET - with planar transistors, the gate leakage and sub-threshold leakage would be unreasonable. But in general, as you make the channel shorter, it is more vulnerable to the aptly-named short-channel problems. All channels suffer from imperfections around the corners (so, close to each depletion region); but with a long-enough channel it doesn't matter; the strong E-field imposed by the gate will keep things under control in the middle of the gate, and the imperfections around the edges don't really matter. As you shorten the channel, two things occur: (1) the E-field is weaker, as there's less gate now, and (2) the imperfection regions are brought closer together.

    As the channel becomes very short, the two imperfection regions start to overlap, which causes disastrous effects; eg: punch-through - at this point, you are leaking 100% of your current from source to drain, and the gate is unable to control the channel at all! The transistor no longer functions as a switch.

    FinFET attempts to deal with this problem by building the channel vertically; where it was previously a flat piece of doped silicon resting between two depletion regions, it now sticks out of the substrate vertically, with the gate wrapping around it. This achieves two things: (1) we once again have more gate surface area - which can put up a stronger E-field to better control the current, and (2) we bring as much of the channel as possible away from the troublesome depletion regions, thereby (partially) mitigating the short channel imperfections (while still having a shorter channel; it's just tall now).

    But you cannot fight physics forever.
    Right now, we surround our channel with the gate on 3 sides (the bottom is still just sitting on top of the substrate). And yet, short channel effects are still a big concern, and continue to get worse as we shorten the channel. The next obvious step, indeed already attempted by semiconductor researchers, is to surround the channel on all 4 sides! Thus we have the next thing, the Gate-All-Around transistor. We'll likely need it around 4-3nm, otherwise - again, we'll end up with transistors that cannot be turned off once they're turned on.
    Agreed; 3D FET transistors were a big improvement, and leakage was heavily reduced by them. The race looks like a FET's current curve: quadratic improvement for years, after which the improvements get slower and slower, and definitely harder. The SiO2 gate insulation has already been around 5 atoms thick for decades; there is no way to improve on that.

    Next computers may be DNA-based or quantum, ..., not only transistors. The improvement margins on hardware are shrinking, so serious software optimization is needed. Well-optimized code can do a lot of work even on microcontrollers.

    4-5 nm will be the last commercial node, maybe available around 2027-2030 (with a few samples or limited quantities before that).

  16. #16
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by iddqd View Post
    That's why they had to come up with finFET - with planar transistors, the gate leakage and sub-threshold leakage would be unreasonable. But in general, as you make the channel shorter, it is more vulnerable to the aptly-named short-channel problems. All channels suffer from imperfections around the corners (so, close to each depletion region); but with a long-enough channel it doesn't matter; the strong E-field imposed by the gate will keep things under control in the middle of the gate, and the imperfections around the edges don't really matter. As you shorten the channel, two things occur: (1) the E-field is weaker, as there's less gate now, and (2) the imperfection regions are brought closer together.

    As the channel becomes very short, the two imperfection regions start to overlap, which causes disastrous effects; eg: punch-through - at this point, you are leaking 100% of your current from source to drain, and the gate is unable to control the channel at all! The transistor no longer functions as a switch.

    FinFET attempts to deal with this problem by building the channel vertically; where it was previously a flat piece of doped silicon resting between two depletion regions, it now sticks out of the substrate vertically, with the gate wrapping around it. This achieves two things: (1) we once again have more gate surface area - which can put up a stronger E-field to better control the current, and (2) we bring as much of the channel as possible away from the troublesome depletion regions, thereby (partially) mitigating the short channel imperfections (while still having a shorter channel; it's just tall now).

    But you cannot fight physics forever. Right now, we surround our channel with the gate on 3 sides (the bottom is still just sitting on top of the substrate). And yet, short channel effects are still a big concern, and continue to get worse as we shorten the channel. The next obvious step, indeed already attempted by semiconductor researchers, is to surround the channel on all 4 sides! Thus we have the next thing, the Gate-All-Around transistor. We'll likely need it around 4-3nm, otherwise - again, we'll end up with transistors that cannot be turned off once they're turned on.
    Isn't that verbatim what I said (regarding the gates themselves and the dielectrics needing to be redesigned)? There are multiple people with graduate EE degrees here; you don't need to justify a simple mistake by trying to show how smart you are.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  17. #17
    Xtreme Addict
    Join Date
    Dec 2004
    Location
    Flying through Space, with armoire, Armoire of INVINCIBILATAAAAY!
    Posts
    1,963
    Well, the point I was trying to make is that leakage always increases as channel length decreases. You can come up with more optimal transistor geometries, but that doesn't really change the fact that it'll still keep getting worse. I'm sorry I decided to back that up with facts.
    Sigs are obnoxious.
