THE LORD Mar 3, 2023 @ 7:58pm
Has AMD lost their mind with the 7950X3D?
With MB included, it's many hundreds of $$$ more than the 13900K.

And how much is the performance gain in the 7950X3D over the 13900K?
Showing 16-30 of 46 comments
PopinFRESH Mar 6, 2023 @ 1:52am 
Originally posted by :
...I wouldn't recommend using the 3D cpu's yet, they're just designed to run max speed all the time...

Not sure where you got this impression, but you have it completely backwards. The 5800X3D is BIOS-limited from being overclocked because of the thermally sensitive V-cache. It isn't limited from stepping down when idle or under low workload.
The Presence Mar 6, 2023 @ 5:29am 
Originally posted by Rumpelcrutchskin:
Pretty sure the reason they pushed back the 7800X3D release is that it will probably perform almost as well as the 7950X3D when gaming, and they didn't want it to ruin the sales of the far more expensive 7900X3D and 7950X3D.
That's why I'm holding off on buying an AM5 board and AMD CPU right now.
Don't be fooled by the written TDP of the CPUs. You will barely scratch the power limit of the 13900K while gaming. The 13900K beats AMD's current 16-core, 32-thread 7950X in both single-core and multi-core scores. You won't even be using 25% of its power while gaming. Which means: if the total TDP of the 13900K is 300 W, in gaming it will pull below 100 W. If you had any mid-range CPU, like the i5-12600K, it would pull about the same amount of power to do the same tasks.

Just because the 13900K has the headroom doesn't mean it will ALWAYS use full power.
It will use the power only when you need/demand it.
Crawl Mar 6, 2023 @ 7:26am 
On average in gaming tests the 13900K pulls about 140 W and the 13900KS around 150 W, while the 7950X is about 105 W and the 7950X3D is only pulling 70 W. Synthetic benchmarks are one thing, but in actual real-world use cases AMD trades blows with, if not outright beats, Intel at far less power consumption.
Even if the 13900K pulls 140 W during gaming as Mr. Crawl said above, and the 7950X pulls 105 W, that's a 35-watt saving. 35 watts is a typical room LED bulb, or a table fan's wattage. If a person games 2 to 3 hours a day on average, it's negligible.

An air conditioner pulls 1,500 to 2,000 watts, btw.
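To put that 35 W in perspective, here's a quick back-of-the-envelope calculation of the yearly energy cost; the daily gaming hours and electricity price are assumptions, not measured data:

```python
# Back-of-the-envelope cost of a 35 W difference in CPU power draw.
# Gaming hours and electricity price are assumptions, not measured data.
watts_saved = 140 - 105      # 13900K vs 7950X gaming draw, per the posts above
hours_per_day = 3            # assumed average daily gaming time
price_per_kwh = 0.15         # assumed electricity price in $/kWh

kwh_per_year = watts_saved * hours_per_day * 365 / 1000
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.1f} kWh/year -> ${cost_per_year:.2f}/year saved")
```

That works out to under six dollars a year at the assumed rate, which supports the "negligible" point.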
Ralf Mar 6, 2023 @ 9:40am 
I find it very strange that the 7950X3D uses the same power in games as the 7700, which has half the cores.

Would like to see how the 7800X3D compares to the 13900K with E-cores disabled. I can't find power consumption reviews for the 13900K or 13700K with E-cores off.
Crawl Mar 6, 2023 @ 9:52am 
Originally posted by Ralf:
I find it very strange that the 7950X3D uses the same power in games as the 7700, which has half the cores.

Because the 7950X3D "parks" or "sleeps" the non-3D-cache cores for gaming, since they aren't really required. I think you could get similar numbers from the 7950X as well with a bit of tuning and not give up any performance. This generation of CPUs really pushed the power to get every ounce of performance, but it's a really inefficient way to do it.
Last edited by Crawl; Mar 6, 2023 @ 10:17am
Originally posted by Ralf:
I find it very strange that the 7950X3D uses the same power in games as the 7700, which has half the cores.

Would like to see how the 7800X3D compares to the 13900K with E-cores disabled. I can't find power consumption reviews for the 13900K or 13700K with E-cores off.
Yes, pretty much what the above post states.

If you're interested in further reading, the term "dark silicon" might be interesting to look up.

https://en.wikipedia.org/wiki/Dark_silicon

Ryzen 9s (and Core i9s) are for highly threaded tasks. They are rather unnecessary for just gaming, because most of the cores won't be utilized. This is how a 7950X, despite effectively being two 7700Xs, can get close to the same power draw as one in gaming tasks.

As for the 3D processors, they use less power simply because they have lower boost clock speeds. In many (though not all) games, the cache more than compensates for this clock speed loss, so the end result is a CPU that performs better for less wattage.

That's how a 7950X3D can get close to the 7700X in power use (circumstantially) with better results.
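The clock-versus-cache tradeoff can be sketched with a textbook cycles-per-instruction model; all the numbers below are made-up illustrative values, not measured figures for these chips:

```python
# Toy CPI (cycles-per-instruction) model: a bigger cache lowers the miss
# rate, which can outweigh a lower boost clock. All inputs are made-up
# illustrative values, not measured figures for these chips.
def throughput_ginstr(freq_ghz, base_cpi, miss_rate, miss_penalty_cycles):
    cpi = base_cpi + miss_rate * miss_penalty_cycles
    return freq_ghz / cpi  # billions of instructions per second

plain = throughput_ginstr(5.7, 1.0, 0.05, 200)  # higher clock, smaller cache
x3d = throughput_ginstr(5.25, 1.0, 0.02, 200)   # lower clock, bigger cache

print(f"plain: {plain:.2f} Ginstr/s, X3D: {x3d:.2f} Ginstr/s")
```

In this toy model the lower-clocked part comes out ahead because far fewer cycles are lost stalling on memory, which is the mechanism the post describes.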
Originally posted by Crawl:
This generation of CPUs really pushed the power to get every ounce of performance, but it's a really inefficient way to do it.
Both Intel and AMD have been doing this for some time. Strong competition, paired with fewer avenues for CPU gains, has led them to do so.

However, far more efficient offerings exist too. Namely, look at the 3700X vs. 3800X or the 5700X vs. 5800X (especially this comparison). Or look at either of the x700 CPUs with PBO on versus off, or any of the x800 CPUs with eco mode on versus off. You often pay for that last stretch of performance with a disproportionate amount of power draw and heat.
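That "disproportionate power for the last stretch" point is easy to quantify as performance per watt; the scores and wattages below are hypothetical placeholders, not real benchmark results:

```python
# Performance per watt. Scores and wattages are hypothetical placeholders
# chosen to show the shape of the tradeoff, not real benchmark results.
def perf_per_watt(score, watts):
    return score / watts

tuned = perf_per_watt(score=9500, watts=76)     # e.g. eco mode / PBO off
pushed = perf_per_watt(score=10000, watts=130)  # e.g. stock limits / PBO on

print(f"tuned: {tuned:.1f} pts/W, pushed: {pushed:.1f} pts/W")
```

With numbers like these, roughly 5% more performance costs roughly 70% more power, which is the shape of the tradeoff being described.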

A similar trend is occurring with GPUs.

I would imagine this trend is part of why undervolting is becoming popular in recent years.
PopinFRESH Mar 6, 2023 @ 7:43pm 
Originally posted by Crawl:
...Because the 7950X3D "parks" or "sleeps" the non 3D cached cores for gaming since they aren't really required....

Overall I concur with your post; however, it isn't that they aren't "required". It would still park those cores while running an identified gaming workload even if they were. AMD has essentially reintroduced the same issue as with the Zen 1, Zen 2, and Zen+ architectures, except rather than it being the memory interface, it is the L3/last-level cache. They are simply doing the same thing as "Game Mode", but in an OS-aware way: having the OS scheduler avoid those cores rather than disabling them via UEFI. The issue is the latency penalty of having to go cross-CCD to access data in that extra L3 V-cache.

One potential upshot for the 7900X3D and 7950X3D is that further work with Microsoft on scheduler optimization and "game" thread identification could allow Windows to run additional non-game processes on the second CCD rather than just leaving it parked.

It will be interesting to see how they approach these high-cache versions of their CPUs as Intel starts to move to their tile-based (a.k.a. chiplet) architectures. Intel has already shown potential designs leveraging EMIB to have on-package main memory. I'd like to see AMD do something like making the additional cache an "L4" cache, sandwiched between the two CCDs and connected to both, in a similar vein to how they moved the memory and other I/O functions out of the compute chiplets and into an I/O die.
Originally posted by PopinFRESH:
It will be interesting to see how they approach these high-cache versions of their CPUs as Intel starts to move to their tile-based (a.k.a. chiplet) architectures. Intel has already shown potential designs leveraging EMIB to have on-package main memory. I'd like to see AMD do something like making the additional cache an "L4" cache, sandwiched between the two CCDs and connected to both, in a similar vein to how they moved the memory and other I/O functions out of the compute chiplets and into an I/O die.
Wouldn't that incur a latency penalty to do that too anyway?

It might be better for the multi-CCD chips but certainly not for the single-CCD chips (at least, I think), and I'm not sure if AMD would take a different approach on Ryzen 7 versus Ryzen 9.

It's also possible that, with die shrinks, more cores per CCD will become a thing in the future? Or maybe an interim where there are, again, multiple CCXs per CCD (Zen 2 and prior were like this)?

Though even if that last one happens, that would mean there's room for higher-than-current core counts, which means we're back to square one, I guess.
Bad 💀 Motha Mar 7, 2023 @ 3:44pm 
Originally posted by 🦜Cloud Boy🦜:
Don't be fooled by the written TDP of the CPUs. You will barely scratch the power limit of the 13900K while gaming. The 13900K beats AMD's current 16-core, 32-thread 7950X in both single-core and multi-core scores. You won't even be using 25% of its power while gaming. Which means: if the total TDP of the 13900K is 300 W, in gaming it will pull below 100 W. If you had any mid-range CPU, like the i5-12600K, it would pull about the same amount of power to do the same tasks.

Just because the 13900K has the headroom doesn't mean it will ALWAYS use full power.
It will use the power only when you need/demand it.

Actually, it's the opposite. What Intel lists as the CPU's max TDP has often been proven to be exceeded; under high loads the actual power draw can reach much higher.
Originally posted by Bad 💀 Motha:
Originally posted by 🦜Cloud Boy🦜:
...Just because 13900k has the headroom, doesn't mean that it will ALWAYS use full power...

Actually, it's the opposite. What Intel lists as the CPU's max TDP has often been proven to be exceeded; under high loads the actual power draw can reach much higher.

It's because previously Intel listed the TDP based on the CPU's base clock only, not turbo.

But they have changed it; now they list both the base and turbo power in their spec sheet.

The i9-13900K's base power is 125 W, and its turbo power is 253 W.

https://www.intel.com/content/www/us/en/products/sku/230496/intel-core-i913900k-processor-36m-cache-up-to-5-80-ghz/specifications.html
Bad 💀 Motha Mar 7, 2023 @ 4:43pm 
Ahh OK, yes, it's about time. Makes so much more sense to do it that way.

I wish, not that it matters that much, that the OS would properly identify and display the turbo clock in plain text, not just the base clock.

For example, if you look at System Properties, System Information, or Task Manager in Windows, it may display the CPU model and base clock, but I have to look up the model to find the turbo, which we shouldn't have to do.

If Task Manager can properly list, for example, cores vs. threads, why not base clock and turbo clock, rather than just showing the base clock?

This is especially true with laptops, because some mobile CPUs have a huge difference between base and turbo, and it would be helpful to know right there, in the model name or the displayed clocks, what they are supposed to be.

I just had a laptop I worked on, and I wasn't very familiar with the mobile 10th-gen Intel CPUs, so everywhere in the laptop specs and in Win10 it said 1.2 GHz.

But the CPU turbos up to 3.4 GHz quite easily when you set the power plan to prefer maximum performance. You shouldn't have to go look up that CPU model online just to clarify that info.
Last edited by Bad 💀 Motha; Mar 7, 2023 @ 4:44pm
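On Linux, at least, both values can be read from a script without looking the model up. Here's a sketch using the cpufreq sysfs interface; note that `base_frequency` is exposed by the intel_pstate driver and may not exist on every system:

```python
# Read base and max CPU frequency from the Linux cpufreq sysfs interface.
# base_frequency is intel_pstate-specific and may be absent on other drivers.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read_khz(name):
    """Return the value of a cpufreq file in kHz, or None if it's missing."""
    p = CPUFREQ / name
    return int(p.read_text().strip()) if p.exists() else None

def fmt_ghz(khz):
    """Format a kHz reading as GHz for display."""
    return "n/a" if khz is None else f"{khz / 1_000_000:.2f} GHz"

base = read_khz("base_frequency")    # e.g. 1.20 GHz on that laptop
turbo = read_khz("cpuinfo_max_freq") # e.g. 3.40 GHz max turbo

print(f"base: {fmt_ghz(base)}, turbo: {fmt_ghz(turbo)}")
```

Windows doesn't expose an equivalent file tree, which is part of why Task Manager only shows what it shows.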
Komarimaru Mar 7, 2023 @ 5:32pm 
Ya, I don't think AMD has ever listed their boost TDP? I know my 5950X can draw well over 220 watts at full load. I'm willing to bet a 7950X pulls way more under max load.
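AMD doesn't publish a boost TDP as such, but the stock socket power limit (PPT) on AM4/AM5 is commonly cited as 1.35x the TDP, so the stock ceilings can be estimated; drawing 220 W would be PBO territory, beyond the stock limit:

```python
# Stock socket power limit (PPT) is commonly cited as 1.35x the rated TDP
# on AM4/AM5. This is a rule of thumb; PBO raises or removes these limits.
def stock_ppt(tdp_watts):
    return tdp_watts * 1.35

for name, tdp in [("5950X", 105), ("7950X", 170)]:
    print(f"{name}: TDP {tdp} W -> stock PPT ~{stock_ppt(tdp):.0f} W")
```

So a stock 5950X should cap near 142 W of package power; well over 220 W implies the limits were raised.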
PopinFRESH Mar 7, 2023 @ 7:01pm 
Originally posted by Illusion of Progress:
Wouldn't that incur a latency penalty to do that too anyway?

Yes, it would still have higher latency; however, as with Zen 3/Zen 4, it would be a consistent latency for all cores. Regarding performance for a single-CCD package, it would depend on the architecture. It may be better as it is now; however, if the cache were not on top of the CCD, the cores might be able to clock higher, making it a net positive.

Date Posted: Mar 3, 2023 @ 7:58pm
Posts: 46