Not sure where you got this impression, but you have it completely backwards. The 5800X3D is BIOS-limited to prevent overclocking because of the thermally sensitive V-Cache. It isn't limited from stepping down when idle or under a low workload.
Just because the 13900K has the headroom doesn't mean it will ALWAYS use full power.
It will only use that power when you need/demand it.
An air conditioner pulls 1,500 to 2,000 watts, by the way.
Would like to see how the 7800X3D compares to the 13900K with E-cores disabled. I can't find power consumption reviews for the 13900K or 13700K with E-cores off.
Because the 7950X3D "parks" or "sleeps" the non-3D-cached cores for gaming, since they aren't really required. I think you could get similar numbers from the 7950X as well with a bit of tuning, without giving up any performance. This generation of CPUs really pushed the power to get every ounce of performance, but it's a really inefficient way to do it.
If you're interested in further reading, the term "dark silicon" might be interesting to look up.
https://en.wikipedia.org/wiki/Dark_silicon
Ryzen 9s (and Core i9s) are for highly threaded tasks. They are rather unnecessary for just gaming, because most of the cores won't be utilized. This is how a 7950X might be (effectively) two 7700Xs, but can get close to the same power used in gaming tasks.
As for the 3D processors, they use less power simply because they have lower boost clock speeds. In (not all, but many) games, the cache more than compensates for this clock speed loss. So the end result is a CPU that performs better for less wattage.
That's how a 7950X3D can get close to the 7700X in power use (circumstantially) with better results. Both Intel and AMD have been doing this for some time; strong competition, paired with fewer avenues for CPU gains, has led them to do so.
However, there exist far more efficient offerings, too. Namely, look at the 3700X vs 3800X or 5700X vs 5800X (especially this comparison). Or look at either of the x700 CPUs with PBO on versus off, or any of the x800 CPUs with eco mode on versus normal (off). You often pay for that last stretch of performance with a disproportionate amount of power draw and heat.
A similar trend is occurring with GPUs.
I would imagine this trend is part of why undervolting has become popular in recent years.
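To make the efficiency tradeoff above concrete, here's a minimal Python sketch of a performance-per-watt comparison. The scores and wattages are hypothetical placeholder numbers chosen to illustrate the "last stretch of performance" point, not measured benchmark results.

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Simple efficiency metric: benchmark score per watt consumed."""
    return score / watts

# Hypothetical numbers: the stock/PBO configuration scores ~5% higher
# but draws ~60% more power than the eco-tuned configuration.
eco = perf_per_watt(score=100.0, watts=65.0)     # points per watt, eco mode
stock = perf_per_watt(score=105.0, watts=105.0)  # points per watt, stock
print(f"eco mode is {eco / stock:.2f}x more efficient")
```

With these placeholder numbers, the eco configuration gives up a sliver of performance for a much better score-per-watt ratio, which is exactly the x700-vs-x800 pattern described above.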
Overall I concur with your post; however, it isn't that those cores aren't "required". It would still park them while running an identified gaming workload even if they were required. AMD has essentially reintroduced the same issue it had with the Zen 1, Zen 2, and Zen 2+ architectures, except rather than the memory interface, it's now the L3/last-level cache. They are doing the same thing as "Game Mode", but in an OS-aware way: the OS scheduler avoids those cores, rather than UEFI disabling them outright. The underlying issue is the latency penalty of going cross-CCD to access data in that extra L3 V-Cache.
One potential upshot for the 7900X3D and 7950X3D is that further work with Microsoft on scheduler optimization and "game" thread identification could allow Windows to run additional non-game processes on the second CCD rather than just leaving it parked.
It will be interesting to see how they approach these high-cache versions of their CPUs as Intel moves to its tile-based (a.k.a. chiplet) architectures. Intel has already shown potential designs leveraging EMIB to put main memory on-package. I'd like to see AMD do something like making the additional cache an "L4" cache, sandwiched between the two CCDs and connected to both, in a similar vein to how they moved memory and other I/O functions out of the compute chiplets and into an I/O die.
It might be better for the multi-CCD chips, but certainly not for the single-CCD chips (at least, I think), and I'm not sure if AMD will take a different approach on Ryzen 7 versus Ryzen 9.
It's also possible that with future die shrinks, more cores per CCD will become a thing? Or maybe an interim step where there are, again, multiple CCXs per CCD (Zen 2 and prior were like this)?
Though even if that last one happens, it would mean there's room for higher-than-current core counts, which puts us back at square one, I guess.
Actually, it's the opposite. Under high loads, the CPU's actual power draw has often been proven to reach much higher than what Intel lists as the max TDP.
That's because Intel previously listed TDP based on the CPU's base clock only, not turbo.
But they have changed this: the spec sheet now lists both a base power and a turbo power.
The i9-13900K's base power is 125 W, and its maximum turbo power is 253 W.
https://www.intel.com/content/www/us/en/products/sku/230496/intel-core-i913900k-processor-36m-cache-up-to-5-80-ghz/specifications.html
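For a sense of scale on those two spec-sheet figures, here's a quick Python sketch of what running at base power versus turbo power costs in electricity. The $0.15/kWh rate and 4 hours/day at sustained full load are illustrative assumptions only; your rate and usage will differ.

```python
def monthly_cost(watts: float, hours_per_day: float,
                 price_per_kwh: float = 0.15, days: int = 30) -> float:
    """Electricity cost of running a load at `watts` for a month."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# 13900K held at turbo power vs. base power, 4 h/day at full load:
print(f"253 W: ${monthly_cost(253, 4):.2f}/month")
print(f"125 W: ${monthly_cost(125, 4):.2f}/month")
```

Even pinned at the full 253 W turbo limit, a few hours of daily load is only a few dollars a month under these assumptions, which puts the "it only uses the power when you demand it" point in perspective.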
I wish (not that it matters all that much) the OS would properly identify and display, in plain text, the turbo clock and not just the base clock.
For example, System Properties, System Information, and Task Manager in Windows may display the CPU model and base clock, but I have to look up the model online to find the turbo clock, which we shouldn't have to do.
If Task Manager can properly list, for example, cores versus threads, why not base clock and turbo clock, rather than just showing the clock as the base?
This is especially true with laptops, because some mobile CPUs have a huge difference between base and turbo, and it would be helpful to see right there, in the model name or the displayed clocks, what they are supposed to be.
I just worked on a laptop, and I wasn't very familiar with the mobile 10th-gen Intel CPUs, so everywhere in the laptop specs and in Windows 10 it says 1.2 GHz.
But the CPU turbos up to 3.4 GHz quite easily when you set the power plan to prefer maximum performance. You shouldn't have to go look up that CPU model online just to clarify that info.
Yes, it would still have higher latency; however, as with Zen 3/Zen 4, it would be a consistent latency for all cores. Regarding performance for a single-CCD package, it would depend on the architecture. It may be better as it is now; however, if the cache were not stacked on top of the CCD, the cores might be able to clock higher, making it a net positive.