Do you think it is a yield issue, or a potential one, for the 8-core CCX chips? Maybe they are trying to hoard them for Epyc?
5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi
6800xt 3ghz
AMD Nvidia/ATI - 3DMark06 scoreboard revisited
asus L1N64-ws or /b depending on bios chip
4x1gig 8500 gkill bpk
2x opteron 8224 @ 3.8ghz
http://www.xtremesystems.org/forums/...&postcount=236
vga= 8800gt
winxp pro
custom chiller -31 water
2x dtek fuzions
bix3-with x3panaflo hi output
antec 850 quattro
heat under msimax abitmax and dfimax
~1~
AMD Ryzen 9 3900X
GigaByte X570 AORUS LITE
Trident-Z 3200 CL14 16GB
AMD Radeon VII
~2~
AMD Ryzen ThreadRipper 2950x
Asus Prime X399-A
GSkill Flare-X 3200mhz, CAS14, 64GB
AMD RX 5700 XT
I doubt that is even a thing tbh.
Typically speaking, dies are binned before they even make it onto a PCB, and of course they bin down, so anything we get on desktop did not make the cut for enterprise.
It's rather simple: AMD is the performance leader now, so they can charge whatever they want. If anything, I'm not complaining about the 5800X price; it's more that I feel they priced the 5900/5950 too low, and it's going to steal market share from the 5800X. The gap between 6-8-12-16 cores isn't very linear price-wise, so much so that I see no room for the rumored 10-core variant (big grain of salt there).
Anyway, to break it down:
5600 $300
5800 $450 (+2 cores for $150)
5900 $550 (+4 cores for $100)
5950 $800 (+4 cores for $250)
Basically, the cost per core makes zero sense whatsoever.
What would make a lot more sense is:
5600 $300
5800 $400 (+2 cores for $100)
5900 $600 (+4 cores for $200)
5950 $800 (+4 cores for $200)
That makes the per-core pricing linear, worth $50 per core to AMD.
At current pricing, the only chip that makes sense to buy is the 5900, as it's by far the best value.
At current pricing, the 5800X is by far the worst value.
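The marginal cost per core in that breakdown can be checked with a quick script (prices and core counts as listed above; purely illustrative):

```python
# Marginal cost per extra core across the current Ryzen 5000 pricing
# (SKU, core count, USD price as quoted above).
lineup = [
    ("5600X", 6, 300),
    ("5800X", 8, 450),
    ("5900X", 12, 550),
    ("5950X", 16, 800),
]

for (lo_name, lo_cores, lo_price), (hi_name, hi_cores, hi_price) in zip(lineup, lineup[1:]):
    extra_cores = hi_cores - lo_cores
    extra_cost = hi_price - lo_price
    print(f"{lo_name} -> {hi_name}: +{extra_cores} cores for ${extra_cost} "
          f"(${extra_cost / extra_cores:.2f} per extra core)")
# 5600X -> 5800X: $75.00 per extra core
# 5800X -> 5900X: $25.00 per extra core
# 5900X -> 5950X: $62.50 per extra core
```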
Last edited by chew*; 11-25-2020 at 10:32 PM.
heatware chew*
I've got no strings to hold me down.
To make me fret, or make me frown.
I had strings but now I'm free.
There are no strings on me
I was looking at the GPU-Z pixel fill rate.
The 6800 XT shows 288.0 GPixel/s while an RTX 3090 shows 189.8 GPixel/s. That card isn't anywhere near that fast: it's 7% slower at 1920x1080, 10% slower at 2560x1440, and 15% slower at 3840x2160, based on TechPowerUp's review.
How the hell does it have 50% more fill rate and end up slower? That can't be the right fill rate for the card if it's consistently slower on every metric, and that's without RTRT on.
I dunno; in the reviews I read, it won in many things at 1080p/1440p and lost at 4K.
Hands-on with it, just fooling around in some games (and a 9900K is not a best-case scenario), the 6800 XT is pretty damn fast. I've commonly seen this particular card running at 2.4 GHz+ on auto.
Last edited by chew*; 11-26-2020 at 07:37 PM.
I have to say, the launch public drivers this time around have so far been bug-free for me.
That is a nice-looking build.
Seems OK without ray tracing.
It's a bit of a conundrum comparing it to the 2080 Ti, which has less overall rasterization than the RX 6800 XT but first-gen ray tracing like the RX 6800 XT.
Then, comparing to the RTX 30 series, you get about the same rasterization but second-gen ray tracing.
Looks like they just need more ray accelerators on it overall (TechPowerUp lists the 6800 XT as having 60 ray accelerators when I'm pretty sure it has 72). The ray accelerators seem to be part of the CUs.
Is that based off simple math only?
Or some kind of unique code?
I concur. I like that case, but it just isn't big enough for me lol.
Just wish I could find a case with a fan hole or fan mount behind the motherboard.
I need some help figuring out my code 97. I installed my new Crosshair VIII last weekend and everything was working. I played Apex Legends for one hour, all settings stock, on my sig computer. After shutting down to have dinner with my family, my PC would not boot up; it shows code 97. I pulled an MSI 970 from my son's PC and it worked in mine. Installed my Radeon VII in his: black screen. Is there any way to get said graphics card working again? It's less than a year old.
MAIN RIG--:
ASUS ROG Strix XG32VQ---:AMD Ryzen 7 5800X--Aquacomputer Cuplex Kryos NEXT--:ASUS Crosshair VIII HERO---
32GB G-Skill AEGIS F4-3000C16S-8GISB --:MSI RADEON RX 6900 XT---:X-Fi Titanium HD modded
Inter-Tech Coba Nitrox Nobility CN-800 NS 800W 80+ Silver--:Cyborg RAT 8--:Creative Sound BlasterX Vanguard K08
Here's an idea of how much power this card has, and keep in mind it's on a rather outdated, stock-running 9900K platform. With those mins you can easily lock fps at 144 on a 144 Hz display and have a 100% consistent gaming experience with zero screen tearing.
PS: for reference, the 5700/XT are in the 120s avg at 1080p on the same system.
Last edited by chew*; 11-30-2020 at 10:41 AM.
For AMD, to calculate theoretical performance values you have: ROPs × clock speed for the pixel fill rate; texture fill rate is physical shaders (TMUs) × clock speed; and FLOPS are logical shaders × clock speed × 2. These do not take into account IPC or efficiency.
Nvidia has changed their core structure a few times since they went to unified shaders, and they renamed their shader count to logical shaders like AMD does when they locked the clock ratio at 2x core frequency.
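Those formulas are easy to sketch in code. The unit counts and boost clocks below are my assumed reference-spec values for illustration (the thread only quotes the GPU-Z fill-rate figures):

```python
# Theoretical throughput from the formulas above:
#   pixel fill rate   = ROPs    x clock
#   texture fill rate = TMUs    x clock
#   FP32 throughput   = shaders x clock x 2  (an FMA counts as two ops)
def theoretical(rops, tmus, shaders, clock_ghz):
    return {
        "pixel_gpix_s": rops * clock_ghz,
        "texture_gtex_s": tmus * clock_ghz,
        "fp32_tflops": shaders * clock_ghz * 2 / 1000,
    }

# Assumed reference-spec unit counts and boost clocks, for illustration only.
rx6800xt = theoretical(rops=128, tmus=288, shaders=4608, clock_ghz=2.25)
rtx3090 = theoretical(rops=112, tmus=328, shaders=10496, clock_ghz=1.695)

print(rx6800xt["pixel_gpix_s"])             # 288.0, the GPU-Z figure quoted earlier
print(round(rtx3090["pixel_gpix_s"], 2))    # 189.84
```

Note that these are on-paper numbers only: nothing in the formula captures IPC, memory bandwidth, or scheduling, which is exactly why the paper gap doesn't show up in games.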
Last edited by zanzabar; 11-30-2020 at 11:17 AM.
OK, so pretty simple.
Taking the IPC deficiency into account, since scaling can't be perfect, putting it at about 60-65% of that pixel fill rate seems about right.
So what's the Nvidia math that seems more accurate? If you have any idea...
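For what it's worth, that 60-65% guess lines up with the review numbers quoted earlier; a rough back-of-envelope check (nothing architectural, just scaling the GPU-Z figures):

```python
# Scale the paper pixel fill rate by an assumed 60-65% "effective" factor
# and compare against the RTX 3090 figure quoted earlier in the thread.
paper_6800xt = 288.0   # GPixel/s, GPU-Z
paper_3090 = 189.8     # GPixel/s, GPU-Z

for eff in (0.60, 0.65):
    effective = paper_6800xt * eff
    print(f"{eff:.0%} effective -> {effective:.1f} GPixel/s, "
          f"{effective / paper_3090:.2f}x the 3090's paper rate")
# 60% -> 172.8 GPixel/s (about 0.91x), 65% -> 187.2 GPixel/s (about 0.99x),
# roughly in line with the 7-15% deficits seen in the review numbers.
```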
I've killed one card with a metal fan clip dropped on the back of it, over the GPU die area, while it was plugged in :/
I might have killed another similarly: the power supply didn't thoroughly dissipate its charge after being unplugged, and the battery for the board fell on it, landing in the same spot as the fan clip.
I'm disappointed in the PC enthusiast / PC gamer "PC master race" as a whole lately: impatient and unwilling to wait for stock to fill up, but OK with buying things as soon as they release.
People were OK with buying $1,200 RTX 2080 Tis when the MSRP was, what, $800? Now they want to complain about it during a worldwide pandemic, after production was stopped for a few months.
All you have to do is wait three months and not buy anything; stock volume will pick up by then.
Last edited by demonkevy666; 12-02-2020 at 08:18 PM.
They are the same thing now. In the past, Nvidia had a shader clock instead of logical cores, so you would multiply the shaders by the shader clock (not the core clock). The ROPs and TMU math is the same. The big change is that NV has more physical shaders that can each do fewer threads (not a great analogy, but it helps), so AMD is counting Bulldozer-style to get higher FLOPS that games won't utilize. With RDNA, AMD has fixed some of it, but only the main shader in a unit can do things like schedule out-of-order operations. AMD also has a bottleneck getting data between the ROPs and the shader clusters that Nvidia does not have. Since most games are made to favor NV due to their larger market share, it makes the IPC look bad for AMD. On properly coded/optimized games that make full use of asynchronous compute, the bottleneck is much smaller.
I gave all the CPUs a quick run over the past couple days. No outliers IMO; average or below average at best. I'll dig in more over the next few days.
One of the things I am interested in testing is the 5950X with 16 cores, SMT off, versus the 5800X in games, to see if there's any uplift to be had there.
Last edited by chew*; 12-04-2020 at 09:58 AM.
"I'm disappointed in the PC enthusiast / PC gamer "PC master race" as a whole lately: impatient and unwilling to wait for stock to fill up, but OK with buying things as soon as they release.
People were OK with buying $1,200 RTX 2080 Tis when the MSRP was, what, $800? Now they want to complain about it during a worldwide pandemic, after production was stopped for a few months.
All you have to do is wait three months and not buy anything; stock volume will pick up by then."
I agree. I got lucky with my case and Arctic Freezer II 280 and patiently waited until they were in stock and not at scalper pricing, but I'm fairly certain I just got lucky. Otherwise I wait. Me-first usually comes with bugs...
Shader clocks are linked to the core clock on Nvidia now, aren't they? I remember the shader clock differing from the core clock.
lol, I almost wanted to call it the Bulldozer of GPUs, except it actually has a performance increase. Just not great ray tracing yet; still waiting on that "Super Resolution" (the FOMO on DLSS is overblown, along with this ray tracing FOMO).
So what's making the bottleneck on the ROPs? I can't find anything on it; the architecture material is all about the CUs, shaders, Infinity Cache, and ray accelerators. I was thinking the scheduler would be the bottleneck there?
I was thinking AMD could add separate schedulers for both the ROPs and the ray accelerators. However, waiting to synchronize all that could be a nightmare. Some sort of bypass to the ray accelerators in the current setup seems more logical and quicker; maybe that's why it's out-of-order execution, though, too.
The scheduler works with the ROPs directly, and in the past the ROPs could not do out-of-order and had to be scheduled in clusters. One of the big reasons AMD chips did not scale well when they got larger (Vega/Fiji) was the way the ROPs worked: it introduced too much latency, so they were bad at high framerates. It's detailed in the RDNA2 white paper. There was coverage, but all I'm finding now is about the RX 6800 and not the RDNA2 reveal.
Got a 3900X open box at Micro Center last month for $340. Not a great chip, but stable at 4450/4300 MHz CCD1/CCD2 at 1.306 V, 1900 FCLK, 1.12 V SoC, with a 360 mm MSI AIO. CCX0/1 are equal and CCX2/3 are equal; I can't get 25 MHz higher on any of them. It runs around 4525 in single-thread loads stock / with PBO on, so the OC was worth it; ST performance is basically the same. I guess I could have tried an RMA for not hitting boost at all, but
Posted on Reddit because I was bored, and some people went nuts and called me an idiot / CPU murderer for using 1.3 V, so I just deleted the thread and moved on.
Still waiting on the ASUS reference 6800 XT to ship; ordered from ProVantage on the 25th, no estimated ship date. I might end up trying to snag a 3070 or 3080 instead if I don't get a ship date soon.
Last edited by BeepBeep2; 12-10-2020 at 09:37 AM.
Thanks mate for the info. I RMA'd the Radeon VII and am still waiting for XFX to resolve the issue.
Well, I ordered an MSI Radeon RX 6900 XT today; I was lucky to get hold of one at alternate.de.
Can't wait to get my hands on it. My boy will get the Radeon VII if XFX repairs it.
PBO single-core boost on the 3000 series is highly dependent on the cooling solution and highly dependent on the application. CB single-core, for example: don't expect miracles. Even 32M Pi, don't expect miracles. With 8M Pi, yes, you can catch the speeds in HWiNFO with the refresh interval set to the fastest setting, and sometimes catch it in AOD.
With 5000 series chips you don't need voodoo magic to catch peak PBO clock speeds in monitoring programs; they are more than happy to boost to those speeds quite often.
I haven't spent a ton of time on the hardware yet, but so far I see a trend with temps vs. RT on/off. For example, Godfall/Dirt 5 with RT I'm seeing 80 C average, whereas in pure rasterization I see 74 C average temps.
Last edited by chew*; 12-10-2020 at 11:24 AM.