Do you think it is a yield issue or a potential one for the 8 core CCX chips? Maybe they are trying to hoard them for EPYC?
6800xt 3ghz :scope::hrhr:
I doubt that is even a thing tbh.
Typically speaking, the dies are binned before they even make it onto a PCB, and of course they bin down, so anything we get on desktop did not make the cut for enterprise.
It's rather simple: AMD is the performance leader now, so they can charge whatever they want. If anything, I'm not complaining about the 5800X price; it's more that I feel they priced the 5900/5950 too low, and it's going to steal market share from the 5800X. The gap between 6-8-12-16 cores isn't very linear price wise, so much so that I see no room for the rumored 10 core variant (big grain of salt there).
Anyway, to break it down:
5600 $300
5800 $450 2 cores cost $150 more
5900 $550 4 cores cost $100 more
5950 $800 4 cores cost $250 more
Basically the cost per core makes 0 sense whatsoever.
What would make a lot more sense is:
5600 $300
5800 $400 2 cores cost $100 more
5900 $600 4 cores cost $200 more
5950 $800 4 cores cost $200 more
Making every core pricing linear and worth $50 per core to AMD.
At current pricing the only chip that makes sense to buy is the 5900, as it's the best value by far.
At current pricing the 5800X is the worst value by far.
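A quick sanity check of the per-core math above (using the launch MSRPs as listed):

```python
# Launch MSRPs from the breakdown above (5000-series parts).
prices = {6: 300, 8: 450, 12: 550, 16: 800}

# Marginal cost of each step up in core count.
tiers = sorted(prices)
for lo, hi in zip(tiers, tiers[1:]):
    step = prices[hi] - prices[lo]
    print(f"{lo} -> {hi} cores: +${step} (${step / (hi - lo):.1f} per extra core)")
# 6 -> 8 cores: +$150 ($75.0 per extra core)
# 8 -> 12 cores: +$100 ($25.0 per extra core)
# 12 -> 16 cores: +$250 ($62.5 per extra core)
```

The marginal cost per core jumping from $75 down to $25 and back up to $62.50 is exactly the non-linearity being complained about; the proposed flat $50-per-core schedule works out to +$100/+$200/+$200 for the same steps.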
I was looking at the GPU-z Pixel fillrate.
The 6800 XT is showing 288.0 GPixel/s while an RTX 3090 shows 189.8 GPixel/s. That card isn't anywhere near that fast: it's 7% slower at 1920x1080, 10% slower at 2560x1440, and 15% slower at 3840x2160, based on TechPowerUp's review.
How the hell does it have 50% more fill rate and end up slower? That can't be the right fill rate for the card if it's consistently slower on every metric, and that's without RTRT on.
I dunno, in the reviews I read it won in many things at 1080p and 1440p and lost at 4K.
Hands on with it, just fooling around in some games (and it's not a best case scenario with a 9900K), the 6800 XT is pretty damn fast. I've commonly seen this particular card running at 2.4+ GHz on auto.
I have to say, the public launch drivers this time around have so far been bug free for me.
https://media.discordapp.net/attachm...206&height=905
That is a nice looking build
Seems OK without ray tracing.
It's a bit of a conundrum there in comparing to the 2080 Ti, which has less overall rasterization than the RX 6800 XT but first-gen ray tracing like the RX 6800 XT.
Then trying to compare to the RTX 30 series, you've got about the same rasterization but second-gen ray tracing.
Looks like they just need more ray accelerators on it overall (TechPowerUp lists the 6800 XT as having 60 ray accelerators when I'm pretty sure it has 72). The ray accelerators seem to be part of the CUs.
Is that based off simple math only?
Or some kind of unique code?
I concur. I like that case, but it just isn't big enough for me lol
Just wish I could find a case with a fan hole or fan mount for a fan behind the motherboard.
I need some help to figure out my code 97. I installed my new Crosshair VIII last weekend and everything was working. Played Apex Legends for an hour, all settings stock, on my sig computer. After shutting down to have dinner with my family, my PC would not boot up and shows code 97. I pulled an MSI 970 from my son's PC and it worked in mine. Installed my Radeon VII in his: black screen. Is there any way to get said graphics card working again? It's less than a year old.
Here's an idea of how much power this card has and keep in mind it's on a rather outdated stock running 9900K platform. With those mins you can easily lock fps at 144/144hz and have a 100% consistent gaming experience with 0 screen tearing.
PS for reference the 5700/xt are in the 120's avg @ 1080P on the same system.
For AMD, to calculate performance values you have: ROPs * clock speed for the pixel fill rate, texture fill rate is physical shaders (TMUs) * clock speed, and FLOPS are logical shaders * clock speed * 2. They do not take into account IPC or efficiency.
Nvidia has changed their core structure a few times since they went to unified shaders, and they renamed their shader count to logical shaders like AMD does when they locked the clock ratio at 2x core frequency.
Ok, so pretty simple.
Taking the IPC deficiency into account, since scaling can't be perfect, putting real-world throughput at about 60-65% of that pixel fill rate seems about right.
So what's the Nvidia math that seems more accurate? If you have any idea...
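Plugging spec-sheet numbers into that ROPs * clock formula does reproduce GPU-Z's figures. A quick sketch (the ROP counts, shader counts, and boost clocks below are the commonly published specs, so treat them as assumptions):

```python
def pixel_fill_gps(rops: int, clock_mhz: float) -> float:
    """Theoretical pixel fill rate in GPixel/s: ROPs * clock."""
    return rops * clock_mhz / 1000.0

def tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical FP32 TFLOPS: logical shaders * clock * 2 (FMA)."""
    return shaders * clock_mhz * 2 / 1e6

# Commonly published specs (assumptions): 6800 XT = 128 ROPs / 4608
# shaders @ 2250 MHz boost; RTX 3090 = 112 ROPs @ 1695 MHz boost.
print(pixel_fill_gps(128, 2250))  # 288.0  -> GPU-Z's 288.0 GPixel/s
print(pixel_fill_gps(112, 1695))  # 189.84 -> GPU-Z's 189.8 GPixel/s
print(tflops(4608, 2250))         # ~20.7 TFLOPS for the 6800 XT
```

These are theoretical peaks that ignore IPC and efficiency entirely, which is exactly why the bigger on-paper number doesn't show up in actual game results.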
I've killed one card with a metal fan clip on the back of it over the gpu die area while it was plugged in :/
I might have killed another similarly; the power supply didn't thoroughly dissipate its charge after being unplugged. The battery for the board fell on it, landed in the same spot as the fan clip.
I'm disappointed in the PC enthusiast / PC gamer "PC master race" as a whole lately: just impatient and unwilling to wait for stock to fill up, but OK with buying things as soon as they release.
People were OK with buying $1,200 RTX 2080 Tis when the MSRP was what, $800? Now they want to complain about it during a worldwide pandemic :( after production was stopped for a few months.
All you have to do is wait three months and not buy anything; stock volume will pick up by then.
They are the same thing now. In the past Nvidia had a shader clock instead of logical cores, so you would multiply the shaders by the shader clock (not core clock). The ROPs and TMU math is the same. The big change is that NV has more physical shaders that can do fewer threads (not a great analogy but it helps), so AMD is counting Bulldozer-style to get higher FLOPS that games won't utilize. RDNA has fixed some of it, but only the main shader in a unit can do things like schedule out-of-order operations. AMD also has a bottleneck getting data between the ROPs and the shader clusters that Nvidia does not have. Since most games are made to favor NV due to their larger market share, it makes the IPC look bad for AMD. On properly coded/optimized games that make full use of asynchronous compute, the bottleneck is much smaller.
I gave all the cpu's a quick run over the past couple days. No outliers imo. avg or below avg at best. I'll dig in more over next few days.
One of the things I am interested in testing is 5950x 16 cores smt off versus 5800x in games to see if there's any uplift to be had there.
"I'm disappoint in the, PC enthusiast, PC gamer's "PC master race" as whole lately just impatient and the unwilling to wait for stock to fill up, but ok with buying things as soon as they release.
people where ok with buying $1,200 RTX 2080 TI's when it's MSRP price was what $800 now they want to complain about it during a world wide a pandemic. after production was stopped for a few months.
All you have to do wait three months and not buy anything, stock volume will pick up by then."
I agree. I got lucky with my case and Arctic Freezer II 280 and patiently waited till they were in stock and not scalper priced, but I'm fairly certain I just got lucky. Otherwise I wait. Me first usually comes with bugs...
Shader clocks are linked to core clock on Nvidia now, aren't they? I remember the shader clock being different vs core.
lol, I almost wanted to call it the Bulldozer of GPUs, except it actually has a performance increase. Just not great ray tracing yet; still waiting on that "Super Resolution" (the FOMO on DLSS is ridiculous, along with this ray tracing FOMO).
So what's making the bottleneck on the ROPs? I can't find anything on it; the architecture material is all about the CUs, shaders, Infinity Cache, and ray accelerators. I was thinking the scheduler would be the bottleneck there?
I was thinking AMD could add separate schedulers to both the ROPs and the ray accelerators. However, waiting to synchronize all that could be a nightmare. Some sort of bypass to the ray accelerators in the current setup seems more logical and quicker; maybe that's why it's out-of-order execution though too.
The scheduler works with the ROPs directly, and in the past the ROPs could not do out-of-order and had to be scheduled in clusters. One of the big reasons that AMD chips did not scale well when they got larger (Vega/Fiji) was the way the ROPs worked. It incurred too much latency, so they were bad at high framerates. They had it detailed in the RDNA2 whitepaper. There was coverage, but all I am finding now is about the RX 6800 and not the RDNA2 reveal.
Got a 3900X open box at Micro Center last month for $340. Not a great chip but stable at 4450/4300 CCD1/2 at 1.306v, 1900 FCLK 1.12v SoC with a 360mm MSI AIO. CCX0/1 are equal and CCX2/3 are equal, can't get 25 MHz higher on any of them. Runs like 4525 in single thread loads stock / with PBO on so OC was worth it, the ST performance is basically the same. I guess I could have tried RMA for not hitting boost at all but :shrug:
Posted on reddit because I was bored and some people went nuts and called me an idiot / CPU murderer for using 1.3v so I just deleted the thread and moved on.
Still waiting on ASUS reference 6800XT to ship, ordered from ProVantage on 25th, no estimated ship date. Might end up trying to snag a 3070 or 3080 instead if I don't get a ship date soon.
Thanks mate for the info :up: I RMA'd the Radeon VII and am still waiting for XFX to resolve the issue.
Well, ordered an MSI Radeon RX 6900 XT today; was lucky to get hold of one at alternate.de.
Can't wait to get my hands on it :D My boy will get the Radeon VII if XFX repairs it.
PBO for single core on 3000 is highly dependent on the cooling solution and highly dependent on the application. CB single thread, for example: don't expect miracles. Even 32M Pi, don't expect miracles. 8M Pi, yes; you can catch the speeds in HWiNFO with the refresh speed set to the fastest setting, and sometimes catch it in AOD.
With 5000 series chips you don't need voodoo magic to catch peak PBO clock speeds in monitoring programs. They are more than happy to boost to those speeds quite often.
I haven't spent a ton of time yet on the hardware, but so far I see a trend with temps vs RT on/off. For example, Godfall/Dirt 5 run RT and I'm seeing 80c avg, whereas in pure rasterization I see 74c avg temps.
^ Another reason to dislike RT :D
I'm on a 360mm AIO with decent temps, idle 29c, ~50-55c ST load. Batch 2018 PGT (after XT launch chips) so was hoping for a little better - CHVI Hero latest BIOS. Might go 5000 series later, not sure if it's worth the investment in a new board for me.
CB15/20/23 ST moves between 4450-4525... might see what Pi does after work. Didn't really have a light enough ST workload to get it any higher. Top 4 CPPC cores are all happy to hit 4525 stock, I set hwinfo to update @ 100ms but have never seen higher. It gets a little worse with PBO on for some reason, a lot more like 4450-4500.
I tested up to 4500 on CCD 1 @ 1.34v load voltage but that's too high for more than a CB20/CB23 run.
I wonder how power usage is with RT on, genuinely curious since no reviewer appeared to test perf/w with RT
Pre-XT, maybe like 3 months prior, might have had the best batches of 3900 chips. You've got to figure they were cherry picking them for XT variants, so the non-XTs would be the duds after XT launch.
Seems to be about the same just temps go up about 5-6c in dirt 5 and godfall. Dirt 5 is decently optimized can run max setting 1440P np, godfall not quite as well optimized epic @ 1440P is not really playable but stepping down a notch is perfectly fine.
I found something nice.
https://www.techspot.com/article/215...-vs-amd-rdna2/
Ok now I see why RDNA 2 is lagging a bit in RT.
I tossed a action sequence together in cyberpunk 2077 with 6800XT still not best case scenario just giving the card a workout in various games.
No jerking around just 1440P High preset card @ defaults.
https://streamable.com/0ru98x
I honestly think Godfall is more taxing on epic than cyberpunk2077 on ultra but keep in mind RT is on in Godfall.
Here's a godfall action sequence 1440P high.
https://streamable.com/jwz2a9
Keep in mind i'm recording with MS live so vid quality is not the best then streamable cuts quality even more.
Will be capturing with an elgato 4K 60 card as soon as i set it all up.
AMDs instinct card looks interesting
HBM returns
http://www.xtremesystems.org/forums/...6&d=1607850238
It's possible for them to release an HBM desktop card; it's also highly improbable that they will unless it gives them some type of edge versus Nvidia's answer to the 6800 XT's value/performance and the 6900 XT's price point vs the 3090.
It's also highly probable they already tested that config and it did not net them the gains they needed or wanted, and/or affected their bottom line as far as what they make per card vs the performance.
The 6900 is one hell of a card. The reference card is well designed. With the fans at 50 percent I could not get it to exceed 61°C while playing Forza Horizon 4 at 1440p ultra settings. There is no need to watercool the reference cards if you play games with a light OC. But when shooting for records in benchmarks, one can put it under water or whatever.
DISCLAIMER
I may list voltages in these posts that may or may not be safe. I would not take my advice as far as voltages being safe, as I'm just turning knobs to see what makes things stable or achieves XXXX speed.
I would definitely take TheStilt's advice as far as safe voltages go, as he has more insider info than I do.
Just a heads up, and it's not really anything bad; just saying don't believe the hype.
Yes, you can do 2000 fabric; this chip can do up to 2066, board dependent, but not without CPU bus/interconnect errors at anything over 1900.
Might be board specific might be cpu. Needs further testing. I've found ways to tune it away but not in real stability tests.
Older variant x570 or at least ch8 hero wifi can also do 2000 but require around 2.1v pll to get around the 07 post code issue.
https://cdn.discordapp.com/attachmen...terconnect.jpg
Example of ch8 hero wifi doing 2k with 2.1 pll volts.
https://media.discordapp.net/attachm...606&height=904
I'm curious why people think Infinity Fabric is a bottleneck. Surely if it was truly a bottleneck you wouldn't have gotten a 19% increase in IPC out of the new 5000 series CPUs. It would have been nothing if that were true.
HMM i guess so. Let me toss this up first and let prime run while i sleep to verify the below is a reality stability wise.
I experimented with Fmax scalar by stilt in bios, experimented with PBO alone then experimented with PBO curve optimizer.
By far the curve optimizer works hands down the best.
https://cdn.discordapp.com/attachmen...15_pbo_100.jpg
https://cdn.discordapp.com/attachmen..._pbo_100_2.jpg
Have learned some about Zen 2's quirks tonight...probably should have picked one of these up a long time ago.
PBO is not useless but with some settings it seems useless / will perform worse than stock, especially when set too high. It appears there are limits in AGESA/SMU regardless of what is actually set with PBO to protect the chip from exploding.
The SMU / FIT will only ever allow it to hit ideal TDC/EDC/PPT numbers and certain settings skew it slightly worse even if not tanking it, so it takes lots of testing, combined with offset undervolt.
PBO Scalar actually does something, but raises the Core VID to unsafe voltages (1.41-1.45v+ all core) past ~4x, but can be mitigated with heavy offset undervolt, increasing MT performance significantly but at the risk of unstable ST due to max VID the chip ever chooses being 1.475-1.50v = low actual ST voltage. Had a hard time booting to windows or even posting with too much undervolt or combination of undervolt and PBO settings the SMU didn't like, - ie. Auto OC @ -0.125v offset and +75 MHz fine, but even -0.05v offset and +100, or +150 still completely unstable...or set PPT to 200w+ after -0.125v undervolt and also unstable, it is changing something with single core / light load boost aggressiveness and breaking everything.
More vdroop with PBO is better; any LLC is bad. It's possible to push to the limit of stability if undervolted as much as possible with PBO using a high scalar number; the chip will just keep going for roughly the same TDC/EDC/PPT targets and adjust its MT Core VID to the moon.
All core OC basically useless, CCX OC not as useless, but depends how ballsy you want to be with voltage and what you are doing - CCX OC is okay if just gaming / no heavy all core loads. Example this CPU does 4.3 all core or 4.45/4.3 CCX OC @ 1.31v load - this is too high voltage for P95 Small FFT but fine for almost everything else (Blend/Cinebench/AIDA stress loop forever, etc.) but hits 75-85c on 360mm AIO in those. 4.4/4.25 can be had at 1.27v actual.
CPPC on, started at TDC 300a/EDC 300a/PPT 300w Scalar 10x, -0.05v undervolt, worked down in 20a/20a/20w increments until I gained a little more performance at TDC 180a/EDC 180a/PPT 180w.
P95 FFT 128k after 2 minutes
1.38v VID 1.23v actual - temp 75c
Clock - 3975
TDC - 121a
EDC - 157a
PPT - 173w (this was lower ~165-170w @ 200-280w PPT set)
Continued undervolting with P95 128K FFT running using ASUS AI Suite until I lost a worker thread in P95 @ -0.1375v...settled on -0.125v.
PBO TDC 180a/EDC 180a/PPT 180w, Scalar 10x, -0.125v offset, Auto OC +75MHz
P95 FFT 128K
1.41v VID 1.2v actual - temp 76c
Clock - 4100
TDC - 121a
EDC - 156a
PPT - 173w
CB R20
1.45v VID 1.25v actual - temp 68c
Clock - 4150
TDC - 95a
EDC - 161a
PPT - 150w
PBO TDC 130a/EDC 170a/PPT 180w, Scalar 10x, -0.125v offset, Auto OC +75MHz
P95 FFT 128k
1.41v VID 1.2v actual - temp 76c
Clock - 4100
TDC - 119a
EDC - 157a
PPT - 174w
CB R20 - 7433 score
1.45v VID 1.25v actual - temp 68c
Clock - 4175
TDC - 97a
EDC - 163a
PPT - 154w
PBO - TDC 130a/EDC 160a/PPT 180w, Scalar 10x, -0.125v offset, Auto OC +75MHz
P95 FFT 128k
1.4v VID 1.19v actual - temp 75c
Clock - 4100
TDC - 116a
EDC - 153a
PPT - 168w
CB R20 - 7497 score
1.44-1.45v VID 1.24-1.25v actual - temp 66c
Clock - 4200
TDC - 96a
EDC - 160a (capped)
PPT - 153w
CB R20 ST - score 527
Clock - 4500-4600 - temp 50c
1.469-1.5v VID, 1.35-1.375v actual
PPT - 45w
I've now seen peak ST frequency of 4650 with these settings as well outside of R20.
My max scores in R20 -
7867 MT / 518 ST with CCX OC @ 4450/4300 1.306v actual
7497 MT (-4.7%) / 527 (+1.7%) @ 4200 MT 1.25v actual and 4500-4600 ST 1.35-1.375v actual.
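The trade-off percentages quoted above are straightforward to reproduce (scores are the R20 results listed):

```python
def pct_delta(new: float, old: float) -> float:
    """Percentage change going from the old score to the new one."""
    return (new / old - 1) * 100

# CCX OC (7867 MT / 518 ST) vs PBO + offset (7497 MT / 527 ST).
print(f"MT: {pct_delta(7497, 7867):+.1f}%")  # -4.7%
print(f"ST: {pct_delta(527, 518):+.1f}%")    # +1.7%
```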
The PBO boost is even better with lighter loads ... only ~50-100 MHz off from manual CCX OC. Considering my temps are much lower and the chip will probably degrade less this way, I think I'll stay with PBO and -0.125v w/ 10x scalar.
12-thread CB20 = 4250 1.28v actual
8-thread CB20 = 4300-4325 1.3v actual
6-thread CB20 = 4300-4425 1.32v actual
4-thread CB20 = 4400-4450 1.34v actual
https://i.imgur.com/qa664r0.png
Something to note in this screenshot is the power reporting deviation - the PPT number is not correct under load, but since the SMU is always hunting for a similar max, these PBO settings happen to skew the power reporting deviation even more than usual, resulting in the higher boosting behavior with PBO and the ridiculous VIDs that would be nearing 1.35v actual voltage all core with no offset. With these settings + the voltage offset, the CPU is always sitting higher up in the AVFS curve than it should be, at a lower actual voltage than expected, while being told it is sucking less power from the VRM than actual.
I'll give you an idea of how drastic the difference is between the CPUs. I'm just using PBO/curve optimizer as there is no point in manual OCing.
Bios bug forcing 3733 fabric currently on 5900x. 3800+ post all the way to 4066 but 3800 has a clock hole apparently.
https://cdn.discordapp.com/attachmen...14/5900x_1.jpg
Here's the curve optimizer + PBO 100 on the 5800x through prime 95.
Note the avg clock speed and the peaks..... Hwi opened after prime started running to log clock speeds.
https://cdn.discordapp.com/attachmen...e_optimzer.jpg
Clock speeds on the new 5000 series with PBO + Auto OC are insane - biggest thing is it seems like they are way easier to OC that way and looks like the MT clocks hold up to whatever you could do manually? 3000 series left some on the table for both ST and MT.
Even though I got a good result for the most part, the fact that turning PBO on initially loses performance without mega tweaking on 3000 kind of sucks. I believe this CPU is capable of 4750 on the two best cores, but the algorithm doesn't go there and with the PBO scalar used to claw back MT performance with the offset voltage, ST voltages are too low to really get above 4600 anyway. Can't have it all on these like Zen 3 does it.
The chip can manually OC to 4.5 Prime stable; thermal shutdown above that. I can live with an avg clock of 4.5 through Prime considering it clocks even higher with less demanding iterations. Prime's not realistic, but it's important for verifying stability. It means on avg it will clock even higher when, say, gaming.
Fed ex just woke me up so excuse my bed hair reflection ( i work nights ). As the gamers tend to say nowadays. LETS GO!!!!!!!!!!!!
https://media.discordapp.net/attachm...205&height=904
Now I need to install that over here in this system swap PSU out to something more tolerable to my ears and get it all in a case for real world results. Hmm maybe wall mount monitor while I'm at it to give me a little more room to work.
https://cdn.discordapp.com/attachmen...424649_HDR.jpg
Does multi-GPU even work anymore? Not like it's needed anyway; I already tested this card to be good for 4K60. I know Godfall has RT on (AMD title) as well as Dirt 5 (AMD title).
That monitor is also HDR and HDR is not 100% free I found that out working on the helium pc with a 2080Ti. Seems it's not 100% free on AMD either so I'll probably do an on/off comparison at some point for that.
Corsair PSU inbound something a little more real world wattage wise and certainly more real world volume wise, Big thanks to Corsair and ASUS in helping get this show on the road.
Hi
Ryzen 5000 FTW! Amazing IPC performance and best price but hard to mem OC :D
I have heard AMD RX 6000 has lower image quality in games! Is that so?!
vs 5xxx? I see no visible difference, maybe it would require screenshot comparisons on same drivers. I like my eye candy though and have a high attention to detail and have not noticed anything "off".
PSU will be here Monday. It's only an AX850, but that fits the criteria for real world, as most won't need more than that anyway, and it should already be more than enough power even with ambient "total platform tuning," as I prefer to call it vs overclocking now.
Arctic liquid freezer II 280 showed up ( my dime ) also picked up an open box ITX ASUS ROG X570-I open box to play with ( my dime ). Wall Mount came in for monitor ( my dime ) A retail 5800x ( my dime ) is on the way to bin as well although i think the chip that really need replacing may be the 5900x. It would be nice to find a 5.0+ capable 5800x though.
Something is arriving from EK on tuesday so lets add them to the special thanks list. I honestly have no clue what it is ( third party via AMD )
May change up the hard drive config but undecided yet as I don't want to run 2 m2's and starve bandwidth on one of the x16 slots.
Things left to get are the Phanteks P500A case (black non-RGB or RGB or white RGB, doesn't matter; it's just for airflow/real world. RGB looks cool in moderation but does not cool) and a 5-pack of Arctic P14 fans. I may add one set of memory to the list, but I'm not quite sure it's needed except to simplify side by side, as this set of 32GB 16-16-16 3600 can pull the 3600 14-15-15 @ 1.45 spec just fine, as well as up to 2000 16-16-16 @ 1.45.
I've come to terms with the fact that CH6 Hero is probably not the ideal motherboard for overclocking memory being a T-topology board and the VTTDDR voltage is adjusted in 0.0066v steps while VDIMM is adjusted in 0.005v steps. Not sure who thought that was a bright idea. Basically only 1.325, 1.35v, 1.380v, 1.4v, 1.425v are functional at all at 3600 MHz+, and more VDIMM above 1.35v is just progressively worse no matter what I try.
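The mismatch can be illustrated with a little arithmetic: VTTDDR wants to sit at VDIMM/2, but with 0.0066 V VTT steps against 0.005 V VDIMM steps, most VDIMM settings leave the nearest available VTT step noticeably off-center. A sketch (the step sizes are as described above; the rest is illustration, not board documentation):

```python
def vtt_error_mv(vdimm: float, vtt_step: float = 0.0066) -> float:
    """Distance (in mV) from VDIMM/2 to the nearest available VTTDDR step."""
    target = vdimm / 2
    nearest = round(target / vtt_step) * vtt_step
    return abs(nearest - target) * 1000

for mv in range(1325, 1455, 5):  # VDIMM 1.325 V .. 1.450 V in 5 mV steps
    v = mv / 1000
    print(f"VDIMM {v:.3f} V -> nearest VTT is {vtt_error_mv(v):.2f} mV off-center")
```

For example, 1.350 V lands about 1.8 mV off-center (102 x 0.0066 = 0.6732 V), while other settings land anywhere up to ~3 mV off, so which VDIMM values behave depends on where the two step grids happen to line up.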
I can pass 75% coverage HCI memtest at DDR4-3800 16-16-16-32-48 1T tRFC 352 1.35v VDIMM/.6732 VTTDDR, could do 70% even up to 1919/3838 (101.0 BCLK) but can't actually get it stable whatsoever because any choice of higher VDIMM and VTTDDR is less stable / fail between 2-35% coverage.
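For reference, the 1919/3838 figures are just the nominal clocks raised with the 101.0 base clock, since memory and fabric scale with BCLK (simple sketch; linear scaling from a 100 MHz base is the assumption):

```python
def with_bclk(nominal: float, bclk: float) -> float:
    """Clock after a BCLK adjustment, assuming linear scaling from 100 MHz."""
    return nominal * bclk / 100.0

print(with_bclk(3800, 101.0))  # 3838.0 MT/s memory
print(with_bclk(1900, 101.0))  # 1919.0 MHz FCLK
```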
CLDO_VDDP is happy at 0.80, 0.90, 1.0...all perform exactly the same and slightly less happy at 0.85, 0.95, everything else is a memory hole @ 1900, nothing changes from 0.80 to 1.0.
CLDO_VDDG IOD needs 0.90/1.00/1.05v for 1900; CLDO_VDDG CCD can run at 0.80, 0.90, 1.0, or 1.05. Anything else +-15mv creates a memory hole for all of these voltages, and nothing really helps going from 0.90-1.05v except that linking the two at 1.05v is marginally better. 1.1v with 1.15v SOC is worse.
SoC voltage likes 1.10v with LLC 1-2-3 but don't really see any difference between them with vdroop anyways. Boots 1.15v but worse, anything above is no POST. Anything except 1/1.05/1.1/1.15v creates memory hole so I have four separate voltages to balance that all create memory holes if set 0.0066v too high or low :clap:
I can drop down to 3600/3666/3733 dividers and have the same issue.
Can't do GDM off at all, can boot 3800 CL14 but unstable even up to 1.55v, can't even get 3600 CL14 stable. Tried screwing with setup times and drive strengths, basically changed nothing.
I've given up on trying to get 3800 CL16 working at this point and guess I will try to drop down to 3733 16-16-16-32 1.35v since the VTTDDR/VDIMM mismatch is killing everything.
I'm pretty sure if I plopped this CPU in a new board it would do 1900 all day.
not necessarily. It's very board and chip dependent. my master v1.0 did 3800 2x8 , ch8 3733 same chip memory. taichi 3800 till later agesa's then 3733.
Bios and agesa matter to. pre 3003 bios and on hero i was doing 1900 NP with my 5900x and now i'm not.
Interesting.
My board, RAM, CPU and I had a talk and decided to do the opposite of the rest of the internet. Apparently my IOD is special. Failed at 95% coverage this time, so making progress. I couldn't get 3666/3733 stable either so I started working on 3800 again.
No matter what I set the board won't complete DRAM training at POST above 3800 even at 1:2 FCLK so I assume the T-Topology has something to do with that vs. every other board on the market being daisy chain idk. Tried every VDIMM boot voltage between 1.3-1.5v
SoC 1.000v LLC 1 (0.981 w/ vdroop)
CLDO_VDDP 0.800v
CLDO_VDDG 0.815v CCD/IOD
VDIMM 1.380v
The difference between this and not stable at all is razor thin. Gonna keep going lower on VDDG and VDDP in 0.005v steps, if I can eventually pass 400% I'm going to call it good enough. SoC power dropped from 25w to 17w under load which should help PBO boost in the end as well.
https://i.imgur.com/Kh5vpDq.png
07 post code = FCLK wall, which the vast majority of chips hit @ 3800 or 3866. Nothing to do with T-topology; it's the typical fabric limitation. Drop the 1:1 ratio and you can do 5000+ memory.
PBO is almost always limited by EDC, which if I run Prime always bangs 100% of 200A while TDC and PPT are usually in the 50-60% range; at least that's the case when I run the 16 core chips.
I've got the liquid freezer II 280 installed now, trying to close the clock gap on min max avg and shift it higher on 5950x now.
testmem5 works with anta extreme it's harsher than hci and faster. Found errors with it that even 2000% coverage in hci did not.
Case is ordered ( ended up with black P500a non rgb as it's what became available ) , just need to order a 5 pack of p14 fans probably later on today.
https://media.discordapp.net/attachm...678&height=903
This is just a baseline run with PBO +100 and manual set current/watts no curve optimizer yet
https://media.discordapp.net/attachm...606&height=904
Tried 1/2 FCLK and still couldn't train DRAM between 3866-4800 CL18-20.
This post by elmor kind of confirms what I'm seeing. I believe it is a board limitation, he was only able to hit 3666 stable 2x8GB w/ B-die: https://www.overclock.net/threads/ry...-x570.1728878/
I'll look into other memory tests. Passed 3600 CL16 1.35v 1800% HCI as a sanity check last night, 3666+ fail all around the same point 70-100% HCI coverage. Now working on tuning 3600 CL14 since I can't get the higher dividers stable.
Chew*, do you mean it can vary just board by board? Like, same exact board, just some work better than others?
Or is it a combination of CPUs and boards liking each other?
I think it boils down to board tuning, trace and BIOS wise. The board I was referring to that pulled 3800 was an Aorus Master v1.0; with 2x8GB it pulled 3800 on every chip I tested, 100% stable.
Swap to DR and it was not stable at all but could boot. Then they revised the board like 2 more times. Makes you go hmmm..
Asrock on older agesas also pulled 3800 then later 3733. the chviii pulled 3733 from launch bios to current with the same 3000 series cpu. I tested every version.....just to be sure.
Both the Asrock and the ASUS were stable with 3733 DR setups as well though.
So I guess it's a combination of a lot of things. If you sacrifice something, I guess you can get certain things stable, or you can tune the board for a majority of configs while maybe losing something.
I gave the gigabyte away for the overall compatibility with SR and DR on CH8 as it had more options available that I could manually set if I needed even though it hit lower speeds.
Right now I can't test vendor to vendor variance as far as board tuning goes as I have 3 ASUS boards all fairly similar well except the 570 I but that should run similar to impact and I wouldn't exceed over an 8 core part in it when testing say prime.
I might pick up an Msi b550 unify X for my vendor to vendor cpu's behave differently testing but i'm not 100% sure I will as I've been waiting to see what other boards might be built electrically for 5000 series.
I've boiled down my issues to VTTDDR mismatch with VDIMM. This board does not allow the two to match up in a way that most VDIMM settings actually work. Was tipped off by this post at overclock.net https://www.overclock.net/threads/ro.../post-26547301
3600 CL14-15-14-xx boots at 1.4-1.425v but unstable. The RAM itself needs higher than this to be stable. 1.45v is close to stable but again fails between 60-150% in HCI even at 3600 CL16-16-16 with high tRFC when 1.35v passed. Temps on each stick are below 50c when failing but I think heat is working against me as well when on the edge. Everything in between 1.425-1.45v is less stable and everything from 1.455-1.525v is less stable yet, but gets marginally better or worse depending on VTTDDR selection.
1.35v is my best bet, so back to CL16 and working out how high I can go or if I need to stay at 3600 with loose secondaries. Another thing for me to look at is BCLK, I can do exactly 101 before I lose boost but I have an audio latency issue at that freq.
I might end up dumping this board and picking up an X570 board in the next few weeks but I'm not sure if it's worth forking over the cash.
I'm going to try CL18 at 3800 with loose timings and low VDIMM to see if I can get that stable for kicks. Might be able to swing 3800 16-17-16-xx at 1.35v if that is the case. This kit has a little more headroom with tRCD = tCL+1.
Passed 1000% HCI last night at 1900 FCLK. Looks like the IOD / mem controller on this CPU are actually excellent, these voltages are the lowest I've seen. I guess somewhat luckily, stability for SOC/VDDP/VDDG influenced results even with bad VTTDDR so I settled on the same voltages for those as before.
DDR4-3800 16-17-16-32-48 tRFC 342 (180ns) GDM on @ 1.360v manual VTT set 0.6798
1.000v SoC LLC 1 (no LLC)
0.800 CLDO_VDDP
0.815 CLDO_VDDG CCD and IOD
Triple confirmed at DDR4-3600 16-16-16-32-48 that 1.35v and 1.36v VDIMM are the only voltages I can actually use because of the VTTDDR steps.
Kind of infuriating but then again this board was designed around CPUs that could only do 3200-3466 anyway I guess. My 1600X couldn't do more than DDR4-2933 stable at any SoC voltage on this board and now I'm thinking the VTTDDR issue might have been why.
Hand tuned subs. Might try to tighten these up further but not much left to gain. Will see what effect going lower on tRCDWR, tRDRDSCL, tWRWRSCL, and tCWL has on performance and maybe try to bring tRFC down a little more.
Moving to TestMem5 with anta777 extreme preset.
https://i.imgur.com/KMgqPgG.png
Passed 3 cycles of TM5. Might change config to 10-20 and test overnight. Gonna see if I can get latency down further, unfortunately a bit limited at max 1.36v :am:
https://i.imgur.com/Xis0pRz.png
https://i.imgur.com/ssj8clI.png
Try pulling in your tRFC, you're really loose for 2x8. I'm running 304 with 2x16GB, you should be able to do 298 or less easy.
Yep had it loose while I worked down the rest just to be safe. Brought it down to 304 last night. Tried 285 for 150ns, won't POST but I can try something in between.
Dropped tRCDWR to 8 and tCWL to 14 as well which bumped writes and copy bandwidth significantly, and latency is now under 64ns. tCWL 10/12 was no POST and tRDRDSCL/tWRWRSCL couldn't get below 4 - these both run 2 easy with more voltage but I'm limited to 1.36v
Was seeing weird behavior in Cinebench R20 that I assume was clock stretching(???) at 1.00v SoC and a little at 1.05v, so bumped it to 1.10v. Score went from 7050-7250 back to the expected 7450-7500 with PBO, same clocks in HWiNFO.
My copy bandwidth is still super low for a dual CCD CPU but must be a product of the board/AGESA... going to try an earlier BIOS version; the low copy bandwidth happens at every divider and timings. Scores look good for CL16 otherwise.
https://i.imgur.com/dkG7AbN.png
https://i.imgur.com/jFKMfu2.png
Turbocool Leaf blower 1200 finally removed from the test rig. Special thx to corsair and my contact there. You know who you are.
https://media.discordapp.net/attachm...205&height=904
Got some EK stuff to test with the new AMD hardware. Maybe we can get rid of that CPU overtemp issue that I got rid of on everything but the 5950X with the Liquid Freezer 280. The 5950X is just too much for that AIO it seems.
My ears are totally thanking Corsair right now. It was so loud in here with the dinosaur power supply I couldn't think straight. The loudest thing in this room now is the rather dated version of the AX1200 in my gaming PC and that's barely audible.
https://streamable.com/zpxt5p
Those come in copper only
https://www.ekwb.com/shop/ek-quantum...d-copper-plexi
Most things come in copper only as an option, but I would get acetal (delrin) and nickel on anything new I buy. Cleaning copper is such a pain in the ass and the cost is basically the same.
I like full copper myself. Yeah it is a pain to clean. Something rough to take off oxidization helps, fine sandpaper for base plates. I forgot what my father told me to clean the internals with a while back, I thought it was a wire brush wheel maybe or something. :-/
vinegar. anything acidic will make it shine when pumped through it.
Test rig is mocked up and in a case now. Still need to "clean" things up a bit (braided cables/accent lighting) and get the proper fans in it when it all arrives.
https://media.discordapp.net/attachm...205&height=904
https://images-ext-2.discordapp.net/..._160418667.jpg
clean setup:up:
I am still using my old Cooler Master Cosmos S.
I will be happy to upgrade to a case with at least one optical bay and option for a 360 rad at the top of the case.
I can't wait anymore for the 5900X so I placed an order for a Ryzen 7 5800X as an upgrade for my aging 1700X.
I will be glad if someone can point me towards a decent PC case without RGB, and thanks in advance.
This fits a 360 front and top but does not fit the optical drive criteria. Most modern cases won't fit that criteria.
Took me a while to finally wean myself off my Blu-ray player and ditch optical bays. You can always go the external route as that's what most people who still need one do.
Cooling wise I think you will like the 5800X. Easy to tame temps and maximize clocks. Price wise, ehh, I've already explained my personal opinion on that and it's solely based on the rest of the product stack and those products' pricing.
My cooling requirement breakdown is this
5600x 240mm aio
5800x 280/360mm aio
5900x custom loop 360mm min
5950x custom loop 360+
This is of course if you want to get the max potential out of the CPUs and just my personal opinion having used them.
Case does look like it has a tad too much room but room to grow is always a criteria. I'll give you an idea where this is going.
Keep in mind it's just a test rig so it needs to fit and accommodate a large range of configurations.
https://streamable.com/pdqi1h
https://media.discordapp.net/attachm...678&height=903
I already have a custom 360 rad loop just for CPU cooling, running in my sig system.
It is hard to let go of or discard my Blu-ray writer/burner. I had to settle for the 5800X after reading about thermal problems on the 5950X and 5900X. I will be playing Cold War and Apex Legends on the rig. I won't be streaming and no content creation.
Thanks once more for your advice. I will still be on the lookout for a case.
I specifically could not clean the skived portions of my EK Supremacy EVO so it is now useless as the fins are clogged, and anything I do now will bend or destroy them. The nickel ones you can clean with some vinegar then a pressure washer or an ultrasonic cleaning bath. I am going to try an ultrasonic cleaner on the copper block but don't have high hopes.
electrolysis is another non-abrasive option
I'm in a good mood, being a Clevelander, since the Browns beat the Steelers in a playoff game. :D Yay, now they've got to climb Mount Everest against the Chiefs. I don't even care if they lose that now lol
Go Ohio State too.
Seeing these BIG WRX80 boards show up for Threadripper is getting me excited too, along with AMD's CES presentation tomorrow at 11:00am. It does have me wondering if there's something like a 16 core on the now-old TRX40 just for the memory bandwidth, and higher up on the WRX80.
Am having trouble with my ram speed.
I can't get them to 3000. When I manually set the speed my system will not boot up.
My old Crosshair VI bios was easy to use when adjusting the ram speed
How can one post pictures?
Do I need a special account 🤔
I upload pics to my private Discord, copy the link, then [img] tag them here.
example
URL
https://media.discordapp.net/attachm...678&height=903
IMG tagged
https://media.discordapp.net/attachm...678&height=903
In other news, a new AGESA came out, flashed and tested, still a few quirks in the BIOS. Acquired a few more parts (fans, modded cables, independent CableMod RGB controllers) for the test system but work's been hectic so I won't get much accomplished till next weekend.
Only thing I accomplished this past weekend was trading my power supply (AX1200) plus cash ($50) for the AX1200i, which is much quieter @ idle, and testing the latest AGESA/BIOS.
No longer rooting for a team again, I'm bad luck haha, the Chiefs are going to kill the Browns.... :-/
Well I'm sort of disappointed and slightly not disappointed. Disappointed: no Threadripper 5000. Not disappointed: WRX80 is now going DIY along with CPUs, but they're the Threadripper 3000 series :-/
https://www.youtube.com/watch?v=MLja1q-M4SU
@ 8:34
You can run the fabric clock different from the NB clock and RAM? I wasn't aware this is wholly true on all boards.
Surely that NB clock becomes a bottleneck in asynchronous mode with the 2:1 setting.
5000 MT/s RAM (2500MHz) / 1250MHz NB / 1800MHz infinity clock ("3600MHz")
The only way to get the NB back up to 1900MHz is to run the RAM at 7600!!! O.o~!
So what does the NB speed run at just above 3800 MT/s, when it switches over to 2:1 mode?
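To make the ratios in the question above concrete, here is a rough sketch of how the clocks relate on Zen 3 (my own helper, assuming MCLK is half the DDR transfer rate, UCLK drops to MCLK/2 in 2:1 mode, and FCLK is set independently when async):

```python
def zen3_clocks(ddr_mts: int, ratio: int, fclk_mhz: int):
    """Return (MCLK, UCLK, FCLK) in MHz; ratio=1 is coupled 1:1, ratio=2 is async 2:1."""
    mclk = ddr_mts // 2   # DDR transfers twice per memory clock
    uclk = mclk // ratio  # memory controller clock halves in 2:1 mode
    return mclk, uclk, fclk_mhz

# DDR4-5000 in 2:1 with FCLK at 1800, as in the example above:
print(zen3_clocks(5000, 2, 1800))  # (2500, 1250, 1800)
# Getting UCLK back to 1900 while still in 2:1 would indeed take DDR4-7600:
print(zen3_clocks(7600, 2, 1900))  # (3800, 1900, 1900)
```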
You can run async, yes, on just about any board. It does not require double the speed to match performance but there are other stutter-related issues in games that make it a moot point and sync is just better. Honestly not much gains to be had: before, I ran stock with XMP loaded and mem speed @ 3200; with a -15 curve optimizer, PBO +100 and 3800 manual tuned timings I gained 2 fps avg in Dirt 5 @ 1080p High preset.
Granted this will vary between titles but you should not expect the larger gains we saw with previous versions of Ryzen. Even in PI 32M, running 12-11-11 vs 16-16-16 at 3800+ I'm not seeing very large gains..... less than 3 secs. Really, async is only good for "benchmarks", not real world, and a select few benchmarks at that, like Geekbench.....
I see far less gains from timings now than previous versions. Running 14-14-14 3800 with high VDIMM amounts to nothing vs 16-16-16 in actual games. It generates more heat in your mem and gives you better AIDA results and that's about it.
I did see someone post that you get slightly higher fps, but also a lot lower 1% and .01% lows, which would make stutter even worse. It was at least 10% worse lows. When you're talking about 20-40 fps those are big gaps.
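For anyone wondering how those lows are computed, one common method is to average the slowest 1% (or 0.1%) of frames and express that as fps — a toy sketch, not any particular reviewer's exact methodology:

```python
def percentile_low_fps(frametimes_ms, fraction):
    """Average fps over the slowest `fraction` of frames."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, int(len(worst) * fraction))
    return 1000.0 / (sum(worst[:n]) / n)

# 100 frames at ~16.7 ms (60 fps) with five 40 ms stutters mixed in:
times = [16.7] * 95 + [40.0] * 5
print(round(percentile_low_fps(times, 0.01)))  # 25 -- the worst frame dominates
print(round(percentile_low_fps(times, 1.00)))  # 56 -- overall average fps
```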
I can't find those tests now where they did high memory speed with low infinity fabric clocks :/ Like 3800 MT/s RAM with a 1200MHz infinity fabric.
One thing I don't get is how bandwidth isn't hampered by the lower northbridge speed and the latency hit. Like you said, your AIDA results are high, but that's about it. Maybe AIDA is just calculating the results from the clock and not actual use.
I'm really wondering if AMD would ever put some kind of L4 cache on the IO die eventually, though I'm not sure it would help in any way.
As for RDNA3, I could see the IO die being a benefit to weed out late frames before outputting them to the monitor. A way to keep more consistency in frame times. An L4 cache there would help offload frames into that buffer to be allocated for output.
The only problem I see with this approach is what happens when you do want to use multi-GPU; you'll need some way for those IO dies to communicate. Infinity fabric is one way around that but is/will it be big enough?
I must ask this, because I'm still on ye olde Phenom II, but this curve optimizer sure seems like the old AOD for the 1st gen Phenom 9000 series, with + and - per core, is it?
I was pretty sure that Ryzen Master is basically a newer, slightly better AOD.
I'm wondering if I could just upgrade to a low-end Athlon 3000G because I'm playing mostly much older games that don't need too much GPU power.
My systems are not extreme but I love them just the same. My main rigs are an AMD HTPC, and the main rig is:
AMD 3700G at stock. Played with overclocking but no biggie.
Cooler Master Hyper 212.
Cooler Master HAF full tower Case
EVGA 850 PSU
16GB of Crucial Ballistix RAM CL16
Asus Prime 550 mobo
AMD RX 580 CTS 8GB GDDR5 video card
Creative SB AE-5 sound card
4TB 256MB cache WD Black HDD
50" Samsung 8 series 4K TV and 24" Asus IPS monitor for daily use.
Sound is my Mach One-like DIY speakers and a JVC 778 stereo receiver for music, Pioneer 5.1 receiver for games.
Windows 11 Pro
So far, one of my favorite computers! I upgraded from a 7-year-old Intel i7 4770K Haswell LOL! Next project is an SSD.
I managed to snag a refurbished EVGA FTW3 RTX 2080 Ti for $865 from Micro Center. Unfortunately, I also caught Covid-19 after picking it up. :/ That was back during Thanksgiving. My mother and brother caught it from me, and my brother got pneumonia with it.
They're better now. My brother got bad to the point he was in the ICU but he's slowly getting over it now.
I'm happy my whole system cost less than an RTX 3090 LOL
I have one of those GPUs... still a beast. Undervolt/Overclock it :up:
I'm ready for Zen4. The rumor is it will be announced in June and available Aug/Sep as in the past. Just hope DDR5 6000 is available because the platform will be DDR5 only.
I still like my AMD 7800X3D.
Waiting for the AMD 8000 CPUs.
Does anybody know if they offer 8000X3D as well?
The next gen server chips have 3D cache so I assume the consumer ones will too. The next gen also has efficiency cores, so I would not jump on it until Windows 12 when we get a proper task scheduler. Windows 11 has a hard time with 3D cache and E/P cores, so I have a feeling doing both asymmetrically will be even worse.
Well, got me some "new" stuff on a steal-deal.
Spent €599.00 on some new old stock:
Asus ROG CROSSHAIR VIII EXTREME
5900X B2 Vermeer
G.Skill F4-3600C14Q-64GTRG
Samsung 980 Pro M.2 SSD 2TB
It's quiet, with more than double the "feel" of speed compared to my 5960X / X99 Classified with 4x 4GB B-die.
And got it for the retail price of a 7950X, and a full 2 years of warranty.
Damn, do times change.:rofl:
AM5/6 is boring to me, I'll be moving to SP6 or whatever's after WRX80; TRX40 was too limited.