https://c8.staticflickr.com/1/422/31...38641e1016.jpg
Stream will be at AMD's Twitch in a few minutes!
https://www.twitch.tv/amd
Or second link: New Horizon
Lisa Su seems like an energetic person
Can someone confirm the 6900k frequency in the Handbrake test?
AMD clearly was 5 seconds faster - but I thought it looked like the 6900k was running at just 3.2 GHz in task manager? Something just seemed fishy to me (especially because they covered up the Zen frequency).
Edit:
Re-watched that portion of the livestream. Definitely running at 3.2 GHz. On top of that, Su clearly reiterated that the Zen CPU wasn't boosting, but its clockspeed wasn't shown. For all we know it was just running at a flat 4 GHz and that's why it was faster... Really wish AMD wouldn't pull these tricks...
The i7-6900K never runs at its base clock, because Intel Turbo Boost keeps it at 3.5 GHz even under heavy load.
But I'm sure the Zen chip was also boosting above 3.4 GHz during the render (maybe around 3.7 GHz).
AliG, Lisa stated earlier that Ryzen runs at a FLAT 3.4 GHz with no boost, and the 6900K runs at default speeds, i.e. 3.2 GHz base / 3.7 GHz boost.
That means in Blender it has almost the same performance as BDE (same result, but Ryzen at 3.4 GHz and BDE at 3.2 GHz), and it's faster in Handbrake.
PEOPLE WITH A 5960X or 6900K: set your CPUs to a FLAT 3.4 GHz, download Blender 2.78 and the test file from AMD's page, and we can compare IPC properly (we can assume 2400 MHz DDR4).
My 5820K (6-core Haswell-E) just did that test at 4.2 GHz in 55 s; Ryzen does it in 35 s (along with the 6900K).
So I set my 5820K to 4.53 GHz and redid the Blender test: 51 seconds. :-/ That gives me the same aggregate amount of jiggahurts as the Ryzen: 3.4 × 8 = 27.2, and 6 × 4.53 ≈ 27.2. Something doesn't add up, or my setup is wrong.
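The aggregate-clock comparison above can be sanity-checked with a few lines of Python. This is a rough sketch using only the render times and clocks reported in this thread, and it assumes the workload scales perfectly with both core count and frequency, which Blender only approximates:

```python
# Rough IPC comparison by normalizing Blender render time to aggregate
# clock (cores x GHz). Numbers are the ones reported in the thread.

def core_ghz(cores, ghz):
    return cores * ghz

def throughput_per_core_ghz(render_seconds, cores, ghz):
    # Work done per second per core-GHz (higher = better per-clock perf).
    return 1.0 / (render_seconds * core_ghz(cores, ghz))

ryzen = throughput_per_core_ghz(35, 8, 3.4)    # demo Ryzen result
hsw_e = throughput_per_core_ghz(51, 6, 4.53)   # the poster's 5820K result

print(f"Aggregate GHz: Ryzen {core_ghz(8, 3.4):.1f}, 5820K {core_ghz(6, 4.53):.1f}")
print(f"Implied per-clock advantage for Ryzen: {ryzen / hsw_e - 1:.0%}")
```

With these inputs the implied per-clock advantage comes out around 46%, which is exactly why the poster suspected either the numbers or the setup: that would be an implausibly large IPC jump over Haswell-E.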
I see today's news includes a new name for amd's cpu.
Okdokey lol, I thought Ryzen was spelled Rizon or Raizen; I think it came from Yu Yu Hakusho lol.
If this was years ago, I would lean towards the idea that AMD potentially has a one-up on Intel when it comes to TDP.
I would hope that I guess if I were going to buy the cpu.
I'm guessing this should at least bridge the diff in gaming, but the real question is: does it clock to the mid-4 GHz range (24/7, turbo voltage)?
If not, then maybe it's possible for them to continue to lose out even to Intel's mainstream quad cores, let alone the next-gen 6-cores (still with dual channel though).
That's what I want to know, do I recommend intel mainstream, or amd zen ver +... :shrug:
Edit:
People have been complaining of lack of info from amd.
Yet AMD often does their little monthly (or whatever) stock-boosting advertising spam, whatever you wanna call it.
What I'm trying to say, I guess, is that whether it's overhyping or being too quiet, there must be something wrong lol.
You can't win either way I guess.
And then there's the (insert random pantsuit lady comment) on how Lisa Su is so tech savvy, or so business savvy.
Don't get me wrong, I don't care lol, I just think it's funny how it pops up randomly in the topics on other sites from time to time when amd news is on the front page :).
Anyways, the memory scores for the new socket type look horrible on the APU. I hope Zen or whatever it's called... looks completely diff in that respect, 40k or go away... (not 12-13k geez...)
The 6900k should be able to sustain a boost clock of 3.5 GHz in both workloads demonstrated by AMD.
I'm talking about the Handbrake test where they actually showed the task manager. They not only intentionally covered up the Ryzen clockspeed (Lisa specifically said no boost, not that it was running stock), but you could visibly see the 6900k was running at 3.2 GHz - i.e. turbo was turned off.
Just seemed very fishy to me, and reminded me of their RX 480 demo where they used different settings for the Nvidia card.
You may be forgetting how turbo works. How we on XS use it, and how most other DIY users do, is not the same as stock. At stock that chip runs 1 core at 3.7 GHz, 2 cores at 3.6 GHz, and 3 or more cores at 3.4 GHz. That puts a stock 6900K and a 3.4 GHz SR AMD part at the same frequency (3.4 GHz) for the benches they ran.
Task manager also only shows the lowest core speed value and does not change with turbo unless you set the BIOS to all cores.
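The per-active-core turbo bins described above can be sketched as a small lookup. The multipliers are the stock 6900K bins quoted in the post; the function itself is just an illustration of the binning concept, not how any real firmware or driver reports clocks (actual behavior also depends on power and thermal limits):

```python
# Sketch of stock per-core Intel Turbo Boost bins on the i7-6900K,
# per the values quoted above (1C: 3.7, 2C: 3.6, 3+C: 3.4 GHz).

TURBO_BINS_6900K = {1: 3.7, 2: 3.6}  # GHz by number of active cores
ALL_CORE_TURBO = 3.4                  # 3 or more active cores
BASE_CLOCK = 3.2

def stock_turbo_ghz(active_cores):
    if active_cores < 1:
        return BASE_CLOCK
    return TURBO_BINS_6900K.get(active_cores, ALL_CORE_TURBO)

for n in (1, 2, 4, 8):
    print(f"{n} active core(s): {stock_turbo_ghz(n)} GHz")
```

With all 8 cores loaded, as in the Blender and Handbrake demos, this lands on 3.4 GHz, matching the Ryzen chip's fixed clock.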
No I understand how turbo works. I also know my task manager shows the turbo speed on my 2500k.
All of those cores were fully loaded - it just seemed very odd to me to see 3.2 GHz and have them cover up the Zen speed.
Do you not set turbo to all cores in the BIOS? If you set turbo to all cores (the normal default on DIY boards), it sets the turbo multiplier as the top speed step and doesn't actually use turbo. The way it reports was a PITA when I was working retail: trying to explain it to people who complained when they saw a lower speed than the tag showed, and then showing them what it did in CPU-Z.
I get that, but if in the beginning Lisa said that Ryzen runs at 3.4 GHz flat, then we know in Handbrake it also runs at 3.4 GHz.
Because if not, AMD would have a court case on their hands...
Yes, they did cover up the Zen clockspeed, and there could be a million reasons for that on engineering hardware. My guess is it shows wrong info. Just imagine if it showed 1.7 GHz and screens of that hit the web: like 90% of people would assume that clock is correct and AMD would have a disaster on their hands.
My prediction:
Ryzen may match Intel's performance, but I doubt it will be a clear-cut win in any sector except maybe performance/watt (and lower prices). For AMD to admit the silicon is still being optimized just months before launch is a red flag for me, and I have serious doubts these chips will clock well. The next revision of Ryzen will probably come very soon after the initial launch, and the release will see a lot of "what Ryzen should have been" reviews.
Two reasons for the low score:
1) 15h family
2) No L3 cache and half the L2 size; AIDA's memory subtest somehow interacts with cache (compare, for example, 15h chips with L3, 4 MB L2, and 2 MB L2 at the same memory speed).
Example of a stock Carrizo IMC:
https://c7.staticflickr.com/2/1564/2...2525db69_z.jpg
stock Kaveri/Godavari
https://c3.staticflickr.com/1/466/19...90316776_z.jpg
Vishera FX has around 30k/20k/28k (read/write/copy) at 2133 MHz memory
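For context on those AIDA numbers, here is a quick back-of-envelope check against theoretical DDR bandwidth. This sketch assumes a standard 64-bit channel; the efficiency figure is just the thread's FX read score divided by the theoretical peak, not a measured value:

```python
# Sanity check of AIDA read scores against theoretical DDR bandwidth.
# Peak MB/s = transfer rate (MT/s) x channels x channel width in bytes.

def peak_bandwidth_mb_s(transfer_rate_mt_s, channels=2, bus_bits=64):
    return transfer_rate_mt_s * channels * bus_bits // 8

peak = peak_bandwidth_mb_s(2133)   # dual-channel DDR3-2133
read_eff = 30_000 / peak           # FX read score quoted in the thread
print(f"Theoretical peak: {peak} MB/s, read efficiency ~{read_eff:.0%}")
```

That puts the Vishera read score at roughly 88% of theoretical peak, which is why the 12-13k APU numbers mentioned earlier in the thread look so out of line.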
Unless you want to make the explicit accusation that Lisa was lying about the test conditions, there is really nothing to debate here. She stated the test conditions during the stream before the tests started. The Zen processor was running at 3.4 GHz without dynamic clocking and the 6900K was running at its default specifications--3.2 GHz base clock with 3.7 GHz boost. That they covered part of the screen for a pre-release product demo isn't shocking, surprising, or anything else.
Lying? Nah, let's call it AMD PR speak shall we. They've only been doing that for as long as I can remember. :p:
I honestly will not believe anything AMD has to say in their PR spin until I see unbiased tests. I'm not entirely against AMD, only against the nonsense they continue to pull off and get away with because such a large number of enthusiasts faint like teenage girls whenever AMD comes out with a paper launch or press event.
But hell to the yes if it's a home run for AMD. AMD still outnumbers Intel 7-4 in my builds since the late 90s!
True... But it will be tough to compete with Intel. Qualcomm's 48-core chip was just released, and it MIGHT be enough to keep their server share alive, but I kind of doubt it. If AMD wants to get a new foothold in the server market, they need a very successful launch now...
I don't know about that. There are plenty of AMD systems out there for sale and most people do not OC or even know what that is. They go to BestBuy/Newegg/Fry's etc and buy pre-built systems.
I was looking around at Laptops for a friend yesterday, so many AMD systems out there, same with Desktops.
The main selling point for most consumers is ... price. This is where AMD can shine big time.
The average consumer does not even know what cores/threads are or what more of them can do; not that it matters much, as they do not fully use them.
It goes... how fast does my system boot... does it take a long time for my favorite app to fire up... is the internet slow?
Most of these can be dealt with by using SSDs.
That's my thought. Remember what they did with the RX 480 demo? Sure the crossfire beat the 1080 single card, but they used different system configs and different in-game settings (I think notably medium vs high settings). I'm just saying, it was very peculiar to me that Lisa Su kept specifically saying "no boost", but they covered the clocks. Zen is guaranteed for 3.4 GHz+ - as far as we know that means the top SKU runs at 4 GHz.
And again, full disclosure: I own a non-trivial amount of AMD stock. I want nothing but them to do well.
AMD is claiming Zen not only overclocks well, but responds to internal temp and voltage sensors for a far more aggressive boost clock. Hard to say until we put it in the hands of average overclockers (I really don't care what it does on LN2).
I mentioned OEM systems and costs in my post, so I'm right with you. It's just concerning for me that silicon optimizations are still ongoing. In the context of this forum, I think a large chunk of us prefer a chip that can clock well, not just perform well at stock speeds. I think AMD will release this chip without fully refining the silicon, it won't clock well, and in this community it may not be received well. Again, I'm speculating, but I wouldn't be surprised if we see new chips based on a new revision before the end of 2017 with much better results. To this end, I'm not committed to buying a Ryzen at launch.
I haven't seen any claims from AMD on overclocking (not implying anything, just haven't seen it) but I have seen the boost concept which looks like it would work well (if the silicon would allow it). I also know what they claimed in the past so forgive me if I temper my expectations a bit. Considering the fact they are still revising silicon, this soon before launch, makes those claims sound very optimistic. That's all I'm saying.
We ABSOLUTELY do not know that. If anything, from all the rumours and the presentation itself we can extrapolate that ATM they can't clock this stably higher. Maybe I'm wrong, but I'm trying to stay up to date and have not seen anything to the contrary.
Lisa said they are running 3.4 GHz flat because they "have not finished optimizations" and the clock WILL NOT BE LOWER than 3.4 GHz. But that only means they can run 3.4 GHz reliably and will attempt more.
Way to take what I said entirely out of context...
First you snip just half a line of my 2 paragraphs (I'm guessing just to troll), then you ignore the "as far as we know" part. That kind of implies we don't know anything...
Really wish XS still had the ignore button at times...
I omitted the part that wasn't part of the thing I was replying to; there is nothing of substance missing. You HARDLY implied 4 GHz clocks on the high-end SKU "as far as we know" when there is no evidence for that and nobody KNOWS it. As for the weird flame attempt, I don't get why you do this, but to each his own.
And yeah, also that:
No. They are not.
And yes, I didn't quote the whole post again, for the sake of people out there who shouldn't be forced to read one whole post three times.
Just for kicks and giggles, man, this is some BS kung fu right there :ROTF:
Quote: "... 'as far as we know' part. That kind of implies we don't know anything..."
Yup that is true about overclocking. People here are also more into that.
I will say this about my current feelings on OCing.
If your system is mostly for gaming, OCing will not do much for you.
I run two systems here. One is a 5960X and the other is 2x E5-2699 v3s. Both systems use ASUS boards; the 5960X system uses Auto OC in the BIOS.
I ran both systems with the same GPU, an ASUS GTX 1080 Strix, for a test with a 4K display.
In the games I tested I saw no real difference in FPS on either system. The biggest single improvement for your system if you game is the GPU. The OC hardly matters at all.
I would suspect OEM AMD systems will be paired with AMD GPU's, these should be beastly systems and if the price point is a lot lower than Intel/Nvidia that's what people will buy.
I also suspect shortly after AMD releases these new CPU's Intel and Nvidia will counter back with parts that will lay waste to AMD, but again at a higher price point.
Completely agree on the overclock, unless you are buying a very low-clocked/cheap CPU. But don't downplay the "online cred" factor. Sure, the top-tier product generally doesn't make up much of the market share, but it does drive sales for the mainstream market.
As has been the case since forever :p:
I know LOL
It's good to see AMD coming out with what appears to be a good processor tho, it's about time :)
I'm really curious to see how it does in the power consumption realm. Last I recall, AMD and Intel measure TDP differently so we can't directly compare off the spec alone.
But it would be really cool if it did offer 6900k-ish performance with 30% less power. I think that alone would be a huge selling point to the server world.
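The "30% less power" figure can be derived from the TDPs AMD quoted (95 W Ryzen vs 140 W 6900K). A tiny sketch, with the caveat the post itself raises: AMD and Intel define TDP differently, so this is nameplate math, not measured wall power:

```python
# TDP comparison from the figures AMD quoted in the demo.
# TDP is a thermal design spec, not measured power draw.

RYZEN_TDP_W = 95
I7_6900K_TDP_W = 140

savings = 1 - RYZEN_TDP_W / I7_6900K_TDP_W
perf_per_watt_gain = I7_6900K_TDP_W / RYZEN_TDP_W - 1  # at equal performance
print(f"TDP reduction: {savings:.0%}")
print(f"Perf/W advantage at equal performance: {perf_per_watt_gain:.0%}")
```

That works out to roughly a 32% TDP reduction, or about a 47% perf/W advantage if the two chips really do deliver equal performance in that workload.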
Really depends on the system and the games.
At 4K, most new games will be completely GPU-bottlenecked to under 60 fps without a Titan or two.
I see a ~35% performance increase in ARMA 3 / Fallout 4 from overclocking RAM and CPU in the 30-60 fps range: ~5-10% from a 6700K @ 4.7 GHz, ~25-30% from RAM at 3866 C16.
In other games the GTX 1070 sees a decent performance increase from overclocking to +100/+800, bringing it up close to reference GTX 1080 performance.
In a lot of games this makes more difference than a stock 2600K vs. an overclocked 6700K at 1080p, let alone 4K.
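The bottleneck argument in this exchange boils down to: frame rate is set by whichever stage (CPU or GPU) takes longer per frame, so a CPU overclock only helps while the CPU is the slower stage. A minimal model, with made-up millisecond numbers purely for illustration:

```python
# Toy CPU/GPU pipeline bottleneck model: FPS is limited by the slower
# of the two per-frame costs. All millisecond figures are invented.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# GPU-bound at 4K: 25 ms GPU vs 10 ms CPU -> a faster CPU changes nothing
print(fps(10, 25), fps(8, 25))   # same FPS despite a 20% faster CPU
# CPU-bound: 10 ms CPU vs 6 ms GPU -> a CPU overclock helps directly
print(fps(10, 6), fps(8, 6))
```

This is why the 5960X and dual-E5 systems in the earlier post showed no real FPS difference at 4K: both were sitting behind the same GPU-side frame cost.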
I would agree with you on the games played.
Mine usually are... DOOM, Path of Exile, D3, Star Citizen, War Thunder, Civ games. I saw no real difference between those games on either system. Nothing to write home about anyway.
lol
Skylake systems can't OC.
I guess AMD jumped the gun on this one. I watched the whole Ryzen intro video; I don't think they would lie or start a video stream like that if they could not deliver it. However, now they have to deliver.
I liked all the features, the demos were lame though.
In fairness, they actually showed off more than usual. I still think they did a lot of hand waving, but the amount of transparency with Zen compared to their previous major launches has been night and day. That alone gives me confidence it'll be a solid competitor.
I can't wait for Zen's secret weapon to come out: the platform.
nvRAM, non-volatile DRAM, means the future of computing is completely different from what we have today.
The APUs for workstations will have a shared CPU/GPU with the same 1 TB of nvRAM as found in the new workstation cards, but on a budget. Consumers will get the same tech by 2020, resulting in VR equal to the real world, with pre-rendered imaging and the 60+ GB/s coherent fabric.
Will pre-rendered, aka pre-processed, textures be idling in nvRAM after reboots? Will level load times come down to just the data side, with all the GPU-side pre-rendered work waiting in DRAM? Will the same get applied to the processor side?
I really can't wait to see what the PLATFORM means for the near future of software. Non-volatile DRAM is the game changer coming with Zen, which is why I think they said, oh, and next year we'll be doing this show again: because the nvRAM is not yet on the market, but next year they can show that off too, along with a Zen APU?
I love your posts dude, they give me a laugh sometimes :).
For one, even if your ROM storage was as fast as RAM, I'd hate to break it to ya, but shader cache doesn't get reused... it gets invalidated each time you run the app again.
So even if you could store all the calcs and such for all your games ahead of time, Nvidia at the least won't use it (on Linux I set it on a self-expanding RAM drive; it puts the shaders in there and can eventually use them if you don't quit the app).
Also, we're not waiting on a ROM replacement for RAM, not yet; that's a niche that's not that useful.
The next big thing will probably be a sort of L4 cache, either on-DIMM (like IBM) or on the CPU via HBM2 or Intel's alternative.
Probably for iGPUs first, obviously, for VGA RAM.
Maybe later it'll mature into a sort of L4, or perhaps even onboard RAM (where the CPU can boot without actual DIMMs).
And I really hate to burst the VR bubble, but I honestly think it's a fad, other than business use (ship designing, intel agencies, commandos lol, etc., just misc whatever).
There was a VR fad back in the 90s, same thing.
The only thing you get out of it is the ability to look freely; I don't really care about HUDs in helmets (X-Plane maybe, or whatever).
3D is cool, but the 20+ multi-view glasses-free 3D screens that were shown in 2011-2012 never showed up for consumers.
Reminds me of OLED. I heard of it probably 20 years ago; it was supposed to come out in a year lol. We're just now actually getting them in TVs/monitors; we had them for a little bit on some cell phones (mine's got an AMOLED).
I agree about VR being a fad, but I do see promise with AR.
A lot of other industries have been working on integrating AR behind the scenes; all it takes is one major player and you shortly have full market penetration.
What Lisa actually said was, "Now, let's talk about frequency. All that speculation out there about frequency. Today I can tell you that our Ryzen processor at launch will have base clock speeds of 3.4 GHz or higher. Each Ryzen processor will also have a boost mode, and we're going to announce those boost frequencies at launch next quarter."
Later when the Blender demo started, she stated the test conditions which brought up frequency again. There, she stated, "So, we're going to start first with the Blender 3D modeling and rendering application. This is actually a great CPU test because it scales very, very well with cores and threads. And we have two demo systems. We have Ryzen which is running at 3.4 GHz, running without boost. And, we have the only other 8-core, 16-thread processor on the market, the Intel Core i7-6900K, running at its stock 3.2 GHz base and 3.7 GHz boost. No adjustments. Just straight out of the box. Everything else about these two systems is the same."
As far as I know, the only time she mentioned optimization not being complete was in the context of bragging about Zen's power to performance ratio against its Intel opponent in the demo. She said, "Ryzen at 3.4 GHz without boost actually matches the performance of the 6900K that currently lists for about 1100 dollars. What do you think of that? *crowd applauds* Ok, even better than that, the Ryzen part in this demo will ship at 95 watt TDP so our performance is matching the 140 watt TDP of 6900K stock with lower power before we have finished optimizing the performance. *crowd applauds*"
Yup. Yes, that's what happened ;-). Very precise. Maybe I'm wrong, but I thought power optimization can and likely will have an impact on the final clocks.
However, I may add there are so many unknowns at this point; we don't know what kind of work is still going on with the platform. One thing to note: we're two-thirds through December and there is no info about motherboards. The CPUs and/or the platform are not finished. It's doubtful this thing will launch in January :( . A realistic timeframe to me seems to be March.
Also, turbo is still in the works, and that can mean a few things: it can be some simple engineering problem, or it may be silicon-dependent, or just some code to "massage". Either way, we don't know.
Anybody here from the Conroe days? When Intel had a game changer there were lots of ES chips and lots of benches before launch; everybody knew it would easily beat the best FX CPU by a good margin at lower clocks, and that it would clock higher.
If AMD has a winner then we should have seen lots of ES by now. I can only think of another RX 480 bubble.
Yeah, but do you remember how much of a surprise the first Athlon was?
They obviously have SOME problems with the platform; it should be here already. But this showcase they did, well, there were hard numbers there that can't be negated: it's a true 8 cores, and performance is on par with Broadwell-E at least in these two benchmarks. So even if they have some problems with it, like clocks or power, it's not gonna be Faildozer again for sure.
Also, AMD has run a tight ship in recent years; maybe it's just because they don't have as many clients to deal with as Intel.
Also, the RX 480 is a decent chip; it was hype from end users that wasn't met.
The RX 480 is kinda similar to the 290X: it was kinda :banana::banana::banana::banana:ty at the start, but a few months later all is good. With the RX 480 they just didn't compete in the high end at all (which is a bad move for sure).
I believe VR will be more than a fad if they can improve the lenses enough to double the FOV in a timely manner, so it's not like running around looking through binoculars, and if AAA games start getting made with full support.
Although I would still be sitting down using kb/mouse with one.
Broadwell showed what even a slow L4 cache can do for CPU performance, so with some luck it will be used in the future.
If the debug chips with pins along the top, aka prototypes, were what was used at first, with a 40% expected IPC increase on a ULV process, then now they're into the LV process with final silicon: no debug hardware, just errata INFs after the fact if a code-path optimization turns out broken in the real world for certain applications.
With the HP process about to happen, why release Zen now? 3.4 GHz might not be the final base clock, nor the top SKU, with turbo optimizations in the final silicon envelope and refreshes coming via FinFET, aka diffusion; probably the whole "diffused in Germany" thing with revision chips.
Hi.