If you need higher FPS, it's an easy way to get more when you're not CPU-limited.
If that's not having your mind made up, then what is it?
The 5950X is indeed the better CPU "overall" if you can use all of its collective computing capability at once. That means, in highly multi-threaded stuff. That's not gaming.
So in games, it goes the other way, because the v-cache of the 5800X3D basically gives it higher effective single core performance. Per core performance > core count for games. On average, it's on par with the Ryzen 7000 series non-X3D in games (and often, it's even higher).
Your thread asked about gaming, so... that's how I answered.
The thing is that you're looking at gaming results at 4K, which masks the difference between the CPUs because the GPU is far more of a bottleneck at 4K. A situation where the GPU is the bottleneck sort of invalidates a comparison that's trying to show the difference between two parts which aren't. That should be obvious.
If faster CPUs aren't showing a difference at 4K, then maybe you don't need a faster CPU yet. The fact is, the 5950X is little more than a pair of 5800Xs on the same chip, and that's only going to help for highly multi-threaded stuff. Games don't need that amount of cores, and the cross-CCD latency may even hurt it and make it perform a bit lower than your current 5800X in some games. For gaming, it's a sidegrade from your current 5800X.
Which is why I advised you to save your money. But it's your money.
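The "GPU is the bottleneck at 4K" logic above can be sketched with a toy frame-time model. All the millisecond figures here are made-up, purely illustrative numbers, not measurements of any of these parts:

```python
# Toy model: each frame takes as long as the slower of the CPU and GPU
# stages (they largely overlap in a pipelined renderer), so the slower
# stage sets the frame rate.

def fps(cpu_ms, gpu_ms):
    """Effective FPS when the slower stage sets the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame costs: a faster CPU cuts its stage from 8 ms to 6 ms.
slow_cpu, fast_cpu = 8.0, 6.0

gpu_1080p = 5.0   # GPU finishes quickly at low resolution -> CPU-bound
gpu_4k = 16.0     # GPU takes far longer at 4K -> GPU-bound

print(fps(slow_cpu, gpu_1080p), fps(fast_cpu, gpu_1080p))  # CPU upgrade helps
print(fps(slow_cpu, gpu_4k), fps(fast_cpu, gpu_4k))        # identical at 4K
```

At 1080p the faster CPU raises FPS; at 4K both CPUs produce the exact same frame rate because the GPU's 16 ms dominates either way. That's the masking effect.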
I'm hoping this is a joke?
If you believe bottleneck calculators, I'm not sure it's worth continuing to try and reason anything. Those fly in the face of how computers and software fundamentally operate. A PC is a combination of parts, and software loads a PC in far too variable a way for it to be "calculated". It varies from application to application, and even in the same application, it varies moment to moment. There are far too many variables. Trying to "average" that is absolute nonsense.
But a lot of people like these "search the web and get it boiled down to one number" answers instead of learning anything, I guess. I mean, I get it, it's complicated, and I'm not going to tell anyone they're bad for not wanting to keep up with it all. None of us keep up with it all. But, sincerely, my last recommendation would be to start by ignoring bottleneck calculators.
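A toy simulation makes the "it varies moment to moment" point concrete. The jitter ranges are invented for illustration, but the conclusion holds for any overlapping ranges: which part is the bottleneck flips from frame to frame, so a single averaged "bottleneck percentage" describes neither state:

```python
import random

random.seed(0)

# Per-frame CPU and GPU costs in ms, jittering frame to frame
# (made-up ranges chosen so the two overlap, as they do in real workloads).
frames = [(random.uniform(6, 14), random.uniform(8, 12)) for _ in range(1000)]

cpu_bound = sum(1 for cpu_ms, gpu_ms in frames if cpu_ms > gpu_ms)
gpu_bound = len(frames) - cpu_bound

print(f"CPU-bound frames: {cpu_bound}, GPU-bound frames: {gpu_bound}")
```

Both counters come out non-zero: the same system, running the same workload, is CPU-bound in some frames and GPU-bound in others.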
Being insufferably antagonistic for no reason. It would be better to just not type anything.
The 5950X is technically faster, as it reaches a higher clock frequency, and that counts for game performance. The 5800X3D has more memory cache, which can figuratively count as faster because it makes games get more FPS. Depends how you interpret the statement.
I look at results at 4K because that's what I want and need, and what I asked for.
The calculator can be used as a reference. If it can't, then any argument about anything is pointless as well, including everything you just wrote.
Whatever...
Higher advertised clocks on the 5950X don't equate to higher performance, because its average frequencies are typically going to be lower when you're balancing the same default power budget across twice as many cores. You would have to tune it quite a bit with PBO just to match a stock 5800X more often than not.
That makes me an "insufferable antagonist" here?
You had your mind made up before creating the thread, and now you're blaming anyone who gives an answer you don't like?
Take a look at the entire thread. Almost everyone else is giving you the same answer I am. So why am I being singled out? Because I'm providing support for my claim against your "no, you're wrong" and name calling? Here's the answer most of us gave you.
"The 5800X3D is faster than the 5950X in gaming, but at 4K, neither are worth buying to replace your 5800X".
That's it. That is your most common answer, and it is the correct one, because the 5800X3D is not worth its asking price as of now (really, neither is the 5950X), and neither are what is holding you back most at 4K anyway.
Your 7800 XT is your much larger limitation at 4K.
Or... or, and hear me out... like I said, we can skip the arbitrary "specifications in a vacuum" part and look at real world results that aren't intentionally bottlenecked by another part.
Who actually sits there and tells themselves "no, those real world results must be meaningless because one has higher MHz"?
I know why you looked at 4K results. But what you don't seem to understand is that those results are going to mask the difference between CPUs because 4K is so much more GPU bottlenecked.
What's odd, though, is that looking only at 4K results should have also shown you that your current 5800X will be more or less the same (at least compared to the 5950X).
What is this even saying?
"If someone can't reference a bottleneck calculator, then nobody has a point on anything"?
I went from a 3900X to a 10850K and tested at 1080p with a 2080 Ti. The performance difference was pretty substantial, anywhere between 10 to 30 percent, which is consistent with both the higher clocks and at least 10% higher single-core performance, as well as lower latency.
Zen 2 had a lot of latency because the I/O die was split from the CPU die, and it was even worse with Ryzen 9. Each subsequent generation has improved on that issue to some degree, which improves lows.
Going from Zen 3 to Zen 3 will make basically no difference.
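That 10 to 30 percent range lines up with simple back-of-envelope math. Using a first-order model where single-thread performance is roughly clock times per-clock throughput, and plugging in hypothetical ~10% uplifts for each (assumed figures, not measured specs of those chips):

```python
# First-order model: single-thread perf ~ clock * IPC.
# Both uplift figures below are hypothetical, just to show the arithmetic.

clock_uplift = 1.10   # ~10% higher sustained clock (assumed)
ipc_uplift = 1.10     # ~10% higher per-clock throughput (assumed)

perf_ratio = clock_uplift * ipc_uplift
print(f"Expected uplift: {(perf_ratio - 1) * 100:.0f}%")
```

Two compounding ~10% gains give roughly a 21% uplift, squarely inside the 10 to 30 percent spread seen across games (the spread itself comes from how CPU-bound each title is).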
Going to be pretty much the same as the 4000 series with less power. The real improvement will come once DLSS 5 is released later next year; pretty sure this release will be geared towards AI over gaming. DLSS 4 will be released with the 5000 series, which should work with the top tier 4000 series, not sure about the 4070 and lower. IMO DLSS is the 4K and 8K future. And as for a CPU for 4K, the 12900K and up are all you need, as your GPU does most of the heavy lifting. The 5090 is the only card in question for me; there should be some improvement there.
Games usually don't benefit from more than 6 cores because games have few main threads.
Which CPU is better? Looking at gaming benchmarks will answer that. A dumb question, really.
What is likely to happen though is the one with more cores will chuck out a lot more heat.
And the split die may impact fps.
The real question is whether a midrange GPU can cut it at 4K.
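The "few main threads" point is basically Amdahl's law: total speedup is capped by the serial fraction of the work. The parallel fraction used here is an illustrative assumption, not a measured figure for any game:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of work that parallelizes across n cores. Games tend to have a dominant
# main/render thread, so p is modest; p = 0.6 is an assumed example value.

def speedup(p, n):
    """Theoretical speedup on n cores with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.6
for cores in (2, 4, 6, 8, 12, 16):
    print(cores, round(speedup(p, cores), 2))
```

With p = 0.6 the curve flattens hard past roughly 6 cores: going from 6 to 16 cores adds only a sliver of speedup, while each core added before 6 was worth far more. That's why the 5950X's extra cores buy little in games.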