Tiberius Sep 18, 2022 @ 10:34pm
16core cpu
Is there even a game that actually benefits from a 16-core CPU (compared to an 8-core CPU)? I've been comparing the gaming performance of my gaming PC and my work PC (9900K vs. 5950X) and I saw little to no benefit from having more cores.

I was planning to upgrade my gaming PC's CPU, but it looks like there's no point in doing so
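For intuition, that pattern is what Amdahl's law predicts: if only part of a game's per-frame work can be spread across cores, extra cores past a point barely help. A minimal sketch (the 60% parallel fraction is an assumed illustrative number, not a measurement from any real game):

```python
# Amdahl's law: ideal speedup when only a fraction of the work is parallel.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 60% of the frame workload parallelizes perfectly.
for n in (1, 8, 16):
    print(f"{n:2d} cores: {amdahl_speedup(0.6, n):.2f}x")
# Going from 8 to 16 cores gains under 9% here, despite doubling the cores.
```

With a mostly serial game loop, doubling the core count moves the needle far less than faster single-thread performance would.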
Last edited by Tiberius; Sep 18, 2022 @ 10:35pm
Showing 16-30 of 31 comments
A&A Sep 19, 2022 @ 6:27am 
I know one:
Hearts of Iron 4
But because the AI is limited to one core, it will still lag. That said, the extra cores will make the calculations a little bit faster.
Soulreaver Sep 19, 2022 @ 7:58am 
In Cyberpunk 2077.

My 9900K is really close to being a relevant bottleneck, and I'm playing at 3840x1600.
Last edited by Soulreaver; Sep 19, 2022 @ 8:01am
r.linder Sep 19, 2022 @ 8:46am 
Originally posted by Soulreaver:
In Cyberpunk 2077.

My 9900K is really close to being a relevant bottleneck, and I'm playing at 3840x1600.
Cyberpunk is mostly GPU bound, 9900K shouldn't struggle that much.
I get over 30 FPS back at max settings just by dropping from 1440p to 1080p.
Soulreaver Sep 19, 2022 @ 8:51am 
Originally posted by r.linder:
Originally posted by Soulreaver:
In Cyberpunk 2077.

My 9900K is really close to being a relevant bottleneck, and I'm playing at 3840x1600.
Cyberpunk is mostly GPU bound, 9900K shouldn't struggle that much.
I get over 30 FPS back at max settings just by dropping from 1440p to 1080p.

Whether it should or shouldn't, I don't know, but I probably spent around 6 hours testing the game over the past few days, and I can say for sure there were occasions where my GPU usage was in the 60-70% range.
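For context, GPU usage sitting in the 60-70% range is the usual sign that the CPU can't feed the GPU fast enough. A crude rule of thumb, sketched below (the 90% threshold is an arbitrary assumption, and frame caps or engine limits can produce the same reading):

```python
def likely_bottleneck(gpu_util_percent: float, threshold: float = 90.0) -> str:
    """Guess the limiting component from average GPU utilization."""
    if gpu_util_percent >= threshold:
        return "GPU-bound"
    return "CPU-bound (or frame-capped / engine-limited)"

print(likely_bottleneck(98))  # near-full GPU usage: GPU-bound
print(likely_bottleneck(65))  # GPU idling: CPU (or something else) is the limit
```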
Ralf Sep 19, 2022 @ 9:03am 
Originally posted by Soulreaver:
Originally posted by r.linder:
Cyberpunk is mostly GPU bound, 9900K shouldn't struggle that much.
I get over 30 FPS back at max settings just by dropping from 1440p to 1080p.

Whether it should or shouldn't, I don't know, but I probably spent around 6 hours testing the game over the past few days, and I can say for sure there were occasions where my GPU usage was in the 60-70% range.
https://www.tomshardware.com/news/cyberpunk-2077-cpu-scaling-benchmarks
Soulreaver Sep 19, 2022 @ 9:18am 
Originally posted by Ralf:
Originally posted by Soulreaver:

Whether it should or shouldn't, I don't know, but I probably spent around 6 hours testing the game over the past few days, and I can say for sure there were occasions where my GPU usage was in the 60-70% range.
https://www.tomshardware.com/news/cyberpunk-2077-cpu-scaling-benchmarks

Thanks Ralf. But this game is at version 1.6 now; the testing in your link was done with version 1.04.

Most settings have changed, especially the demanding ones, and the variety of demands across different areas is vast; I'm doubtful those were taken into consideration when doing a broad CPU evaluation.

I also don't want to make my testing results a bigger part of this thread anyway.
Last edited by Soulreaver; Sep 19, 2022 @ 9:21am
r.linder Sep 19, 2022 @ 9:57am 
Originally posted by Soulreaver:
Originally posted by Ralf:
https://www.tomshardware.com/news/cyberpunk-2077-cpu-scaling-benchmarks

Thanks Ralf. But this game is at version 1.6 now; the testing in your link was done with version 1.04.

Most settings have changed, especially the demanding ones, and the variety of demands across different areas is vast; I'm doubtful those were taken into consideration when doing a broad CPU evaluation.

I also don't want to make my testing results a bigger part of this thread anyway.
Nah it's still pretty accurate, if not easier to run.

When I tested 1080p with Psycho settings, I was getting a minimum of 80~90 FPS in the middle of the city near Jig Jig Street; at 1440p, the range in the same area is 60~70 FPS at best with the same settings. This is all with DLSS on Balanced, not Quality.

An overclocked i9-9900K should have no problems handling that. It's not as considerable a bottleneck as my old R9 3900X, which gained pretty much nothing when going from 1440p down to 1080p, and that was after patch 1.3. My 3900X was a garbage-bin chip that couldn't even clock better than 4.1 GHz.
Last edited by r.linder; Sep 19, 2022 @ 10:01am
Soulreaver Sep 19, 2022 @ 10:46am 
Originally posted by r.linder:
Originally posted by Soulreaver:

Thanks Ralf. But this game is at version 1.6 now; the testing in your link was done with version 1.04.

Most settings have changed, especially the demanding ones, and the variety of demands across different areas is vast; I'm doubtful those were taken into consideration when doing a broad CPU evaluation.

I also don't want to make my testing results a bigger part of this thread anyway.
Nah it's still pretty accurate, if not easier to run.

When I tested 1080p with Psycho settings, I was getting a minimum of 80~90 FPS in the middle of the city near Jig Jig Street; at 1440p, the range in the same area is 60~70 FPS at best with the same settings. This is all with DLSS on Balanced, not Quality.

An overclocked i9-9900K should have no problems handling that. It's not as considerable a bottleneck as my old R9 3900X, which gained pretty much nothing when going from 1440p down to 1080p, and that was after patch 1.3. My 3900X was a garbage-bin chip that couldn't even clock better than 4.1 GHz.

I won't continue participating in this. I understand you are not of ill intent, but I definitely won't go over countless charts and results from around 8 hours of testing plus another 4 hours of watching YouTube, reading Reddit, etc.

You may have your own findings and research. Maybe those are better. But after around 12 hours of research and testing, I stand by mine.

Have a good one.
Last edited by Soulreaver; Sep 19, 2022 @ 10:46am
ZeekAncient Sep 19, 2022 @ 3:42pm 
Originally posted by I am Caim!:
Strictly for gaming, more than 8 cores is not necessary; that is why AMD positions the 5800X3D as its gaming flagship. If you are only gaming, then that should be your target.

Things will change as core count increases every generation now, but for the time being, you should be comfortable on a fast 8-core processor with lots of cache, fast memory, and a recent PCIe revision. The 9900K falls short on all three of these by 2022 standards.

What?! I will agree with you that games (well, some games) will benefit from lots of cache and fast memory, but which games will benefit from the PCIe revision? That is, PCIe 3.0 vs. 4.0. You will notice almost no difference when running a current-gen GPU at PCIe 3.0 vs. 4.0.

And to say that the 9900K falls short on all of these by 2022 standards is utter nonsense. I will agree that the 5800X3D is a faster processor than the 9900K, but outside of 1080p there is almost no difference between the two in most games. At 1080p, the 5800X3D will perform better in CPU-intensive games, but at 1440p that gap closes significantly, and at 4K there will be literally no difference between the two CPUs.

Only in the rare case of a really CPU-intensive game will there be much difference, and at 1440p or 4K it will still be slight. Most games are GPU-bound. While the 5800X3D is a faster processor with more L3 cache, the 9900K is no slouch and still a performer in 2022. I would have no issue pairing either with a high-end GPU. And if playing at a resolution like 4K, it would make more sense to buy a cheaper 8-core CPU, like a 5700X, than a 5800X3D.
Last edited by ZeekAncient; Sep 19, 2022 @ 3:46pm
ZeekAncient Sep 19, 2022 @ 3:49pm 
Originally posted by Soulreaver:
In Cyberpunk 2077.

My 9900K is really close to being a relevant bottleneck, and I'm playing at 3840x1600.

If you are playing at 3840 x 1600, your 9900K WILL NOT bottleneck any GPU you pair with it in Cyberpunk 2077. If you are having performance issues, it most certainly isn't your CPU.

What GPU are you using?

https://www.techspot.com/review/2502-upgrade-ryzen-3600-to-5800x3d/

Look at this article. It is comparing a 3600, a 5600, and a 5800X3D. Now go down to the Cyberpunk benchmark. At 1080p, there is quite a difference between the CPUs. But when you go up to 1440p, there is almost no difference. At 4K, which is close to what you are running, there would literally be no difference.

And there is quite a difference between these 3 CPUs. But aside from 1080p, and CPU intensive games, CPU really doesn't matter as much as GPU in gaming. Most games are GPU bound. These benchmarks were taken with a 6950XT and 6600XT.

Techspot has other articles comparing other CPUs. Here is one comparing the 5800X3D and 5800X.

https://www.techspot.com/review/2451-ryzen-5800x3D-vs-ryzen-5800x/

As you can see, there is a big difference in performance at 1080p. At 4K? Virtually no difference.
Last edited by ZeekAncient; Sep 19, 2022 @ 3:55pm
ZeekAncient Sep 19, 2022 @ 4:19pm 
Originally posted by I am Caim!:
Originally posted by ZeekAncient:
snip

The big problem is the laser focus on average fps, a GPU on a fast bus is going to handle frame rate spikes much better, this should show in some titles but primarily in VR. There's also the question of resizable BAR (only a few Z390 boards received an update to support this, and almost no Z370 did).

It's game dependent, there are some games where you won't feel a difference, there are some games where you'll have quite a mess in your hands by using PCIe 3.0 or 4.0 x8.

Anyway, call it nonsense if you want, but you'd know that the experience on a Zen 3 or Alder Lake system is far, far more refined. The little things really do add up, and the increase in IPC is fast approaching massive if you compare those to Alder Lake's P-cores, let alone Raptor's coming up really soon.

I don't have a 9900K, but I was using a 10700K, very similar to a 9900K, with a 3070 Ti and Resizable BAR. The 10700K is only PCIe 3.0, so it is like running PCIe 4.0 x8, and there was absolutely no performance loss compared to running on a system with PCIe 4.0. Trust me. And every publication online will say the same thing.

Now, like you said, if running a GPU like the 6500XT, and maybe the 3050 if I am not mistaken, it could be an issue, as those GPUs do not run at x16. But I guarantee that if you pair one of those GPUs with a 9900K, the bottleneck will not come from the CPU; most games are GPU-bound, and those GPUs will be the bottleneck.

And again: what resolution you are running at makes a big difference. Just look at the article I posted comparing the 3600 and 5600 vs. the 5800X3D, and even the 5800X vs. the 5800X3D. On paper, there is quite a discrepancy between all of those CPUs, and while the 5800X3D is much faster at 1080p, at 1440p the gap between the CPUs closes considerably. At 4K, there is virtually no difference. Techspot also has other extensive articles comparing all sorts of CPUs; the story is the same.

Now, I don't know about VR, and quite frankly don't care, personally; that wasn't the topic here, just straight-up gaming performance. And my point remains the same: when running a modern GPU that is PCIe 4.0 x16, the difference between pairing it with a CPU that is PCIe 3.0 capable vs. PCIe 4.0 capable will be negligible.
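The arithmetic behind the PCIe 3.0 x16 vs. 4.0 x8 comparison can be sketched as follows (theoretical per-direction bandwidth; real-world transfers see somewhat less):

```python
def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Theoretical one-direction link bandwidth in GB/s.
    Gens 3-5 use 128b/130b encoding; per-lane rates are in GT/s."""
    gt_per_s = {3: 8.0, 4: 16.0, 5: 32.0}[gen]
    return gt_per_s * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x16: {pcie_bandwidth_gbps(3, 16):.2f} GB/s")  # ~15.75
print(f"PCIe 4.0 x8:  {pcie_bandwidth_gbps(4, 8):.2f} GB/s")   # ~15.75, identical
print(f"PCIe 4.0 x16: {pcie_bandwidth_gbps(4, 16):.2f} GB/s")  # ~31.51
```

A PCIe 3.0 x16 slot and a PCIe 4.0 x8 link offer the same raw bandwidth, which is why a 10700K at 3.0 x16 behaves like a 4.0 card running at x8.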

Originally posted by I am Caim!:
Anyway, call it nonsense if you want, but you'd know that the experience on a Zen 3 or Alder Lake system is far, far more refined.

But this right here IS utter nonsense. Go look at benchmarks. The IPC of Zen 3 is not all that dissimilar from Intel 11th Gen (Rocket Lake, NOT Alder Lake). In fact, a processor like the 5800X trades blows with a processor like the 11700K: in some games the 5800X is faster, in some the 11700K. And honestly, the 11700K is not that much better than a 10700K; in fact, the 10700K performs better in some games.

https://www.techspot.com/review/2261-ryzen-5800x-vs-core-11700k-vs-10700k/

The performance between all these processors is almost identical in gaming, especially at 1440p and 4K. At 1080p, the 5800X is slightly faster in some games, but not by enough to make a difference. In some, the 10700K is better than both, at all resolutions. And the 9900K is almost identical to a 10700K.

So, to say that the experience is far, far more refined with Zen 3, or Alder Lake, IS utter nonsense.
Last edited by ZeekAncient; Sep 19, 2022 @ 4:46pm
ZeekAncient Sep 19, 2022 @ 5:09pm 
Originally posted by I am Caim!:
We all knew RKL had some performance regressions in gaming, likely due to the new arch after years of Skylake derivative CPUs. This isn't the case anymore, ADL is significantly faster. Zen 3 has always shown very strong gaming performance, especially on the X3D.

There is little point in trying to defend the indefensible, especially since I never said they were weak or that they did not perform well; just that they did not meet this year's performance standard, which is objectively true. These platforms have begun showing signs of age, and neither the 9900K nor the 10700K are even close to chart-toppers anymore. With DDR5, PCIe 5.0, modern storage controllers and GPUs, we're just beginning to see performance levels that these platforms weren't designed to support. Otherwise, following that logic, just buy an i7-5775C for cheap. It runs many if not most games as well as the 10600K does.

That also very much concerns current-generation hardware: with both Ada and RDNA 3 coming up and relying on cache, transfer speed over the PCIe bus might begin to matter more. Again, that's focusing only on average FPS, not on the percentiles, and across a narrow suite of games. We don't know yet, but we'll soon find out.

Again, I don't know where you are getting your info, but you show no evidence to support your claims, so you are the one trying to defend the indefensible. To say that they do not meet this year's performance standards is just ridiculous. Why? Because they don't use PCIe 4.0? Ridiculous. Like I said, running a GPU at PCIe 3.0 will not hinder its performance vs. a system with PCIe 4.0.

Sure, Alder Lake and the 5800X3D are faster than these CPUs. But in gaming, that is really only at 1080p; at 1440p, and especially at 4K, there will be no difference. Just go read the articles I posted. There was no difference between the 5800X, the 11700K, and the 10700K, and in turn that means the 9900K. In fact, in some games the 10700K was faster than both the 5800X and the 11700K. I also posted articles comparing the 5800X3D to the 3600, 5600, and 5800X. The story was the same: at 4K, there was virtually no difference.

So, how can you sit there and say that these platforms are starting to show their age? If one is using a high-end GPU (or really any GPU) and playing at 1440p or 4K, they will see virtually no difference in performance between the 9900K, Alder Lake, and the 5800X3D, especially at 4K. There can still be some difference at 1440p in more CPU-intensive games. However, most games are GPU-intensive nowadays, so the CPU doesn't matter as much as the GPU does.

The 9900K and 10700K are not the newest CPUs on the block, but to say something like they don't meet 2022's performance standard, or that they are starting to show their age, is laughable. The 10700K was released in 2020 and is more than adequate for the games of 2022. And in certain conditions, like playing at 1440p or 4K, it will perform just as well as Zen 3, Alder Lake, and Rocket Lake. That is the bottom line.

Originally posted by I am Caim!:
We all knew RKL had some performance regressions in gaming, likely due to the new arch after years of Skylake derivative CPUs.

The article showed the 10700K outperforming BOTH the 5800X AND the 11700K in select games, though the 1% lows were still better on the 11700K. And again, these performance differences were very minor; overall, the 11700K is slightly faster than the 10700K across games. But my point is that all three of these processors perform very close to one another, trading blows. So, while Alder Lake is faster than those three platforms, the 9900K and 10700K are still more than adequate in 2022. At higher resolutions, like 4K, they will not perform any worse than even Alder Lake or the 5800X3D. In fact, if you are playing at 1440p, and especially 4K, and want to save some money, a CPU like the 10700K, which is going for very cheap now, might be a better value than a more expensive CPU, even if it is only PCIe 3.0, as that will make very little difference in gaming.

So, what I was trying to say is that claiming Zen 3 or Alder Lake will provide a far, far more refined gaming experience than Comet Lake, Rocket Lake, or even Coffee Lake (the 9900K) is just foolish. Again, the GPU is what matters, not the CPU, especially at 1440p and 4K.
Last edited by ZeekAncient; Sep 19, 2022 @ 5:25pm
ZeekAncient Sep 19, 2022 @ 5:39pm 
I don't understand why you are posting an AnandTech article on the 5775C. What does that have to do with what we are talking about?

And I am not trying to get defensive, but I just thought the notion that the 9900K or the 10700K do not meet 2022 standards was ridiculous. Sure, newer processors have IPC gains and better overall performance, but in gaming, especially when you are GPU-bound, as at higher resolutions, these processors will perform just as well as Zen 3, Rocket Lake, and Alder Lake.

That is just the point I was making, and proving. The 9900K was released in 2018 and the 10700K in 2020, and while they were chart-toppers at the time and are not chart-toppers today, that doesn't mean they are inadequate. I was also showing that these charts are based on 1080p, which makes games more and more CPU-bound; 1080p is the new 720p. Previously, 720p and below was the testing resolution that made games CPU-bound, but now 1080p does this as well. Reviewers test at lower resolutions to show the differences between CPUs. Testing at higher resolutions makes games more and more GPU-bound, and thus there is less and less difference between the CPUs, unless a game is extremely CPU-intensive; but those tend to be older games that don't scale well and were not designed to make use of newer processors with more cores.

So, my point is: this is Steam, and Steam is for gaming. When playing at higher resolutions, like 1440p, 4K, and some ultrawide resolutions, you become more and more GPU-bound, and more so the higher you go. That is why I always say to get yourself a good CPU and then spend twice as much on the GPU. The GPU is what determines your overall performance in games. So midrange CPUs, and CPUs like the 9900K, will give you just as good performance as newer, more expensive CPUs when paired with a modern GPU at higher resolutions.
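The resolution argument can be made concrete with a toy frame-time model, where a frame takes as long as the slower of the CPU and GPU stages. All numbers below are made-up illustrative values, not benchmarks of any real hardware:

```python
def fps(cpu_ms: float, gpu_ms_per_mpix: float, megapixels: float) -> float:
    """Toy model: frame time = max(CPU time, GPU time),
    with GPU time scaling linearly in pixel count."""
    return 1000.0 / max(cpu_ms, gpu_ms_per_mpix * megapixels)

# Hypothetical CPUs: a fast one at 5 ms/frame, a slower one at 7 ms/frame.
for label, mpix in (("1080p", 2.07), ("1440p", 3.69), ("4K", 8.29)):
    print(f"{label}: fast CPU {fps(5.0, 2.0, mpix):.1f} FPS, "
          f"slow CPU {fps(7.0, 2.0, mpix):.1f} FPS")
```

At 1080p the faster CPU is clearly ahead; at 1440p and 4K the GPU stage dominates in this model and both CPUs land on the same frame rate.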
Last edited by ZeekAncient; Sep 19, 2022 @ 5:56pm
ZeekAncient Sep 19, 2022 @ 6:12pm 
Originally posted by Guydodge:
The 9900K is still very much in the running, no need to upgrade. If you have a 3090 or above,
you could overclock it if you want, but it's not even necessary.
Maybe with the 4000 series, but from what I've seen they are only going to run about
20% faster, so 4K will still be hard to run. But at least you'll probably be able to hold 60 FPS
on high/ultra, so that's a plus.

I think the 40 series will be more than just 20% better than the 30 series, at least from what I am seeing in early leaks. But of course, it is all speculation until we see actual benchmarks and hardware.

And 4K is not as hard to run as you think. I know you are always claiming this, and many times I refute it and then delete my comments, lol, because I don't want to get into an argument. But I have been playing at 4K for a couple of years, and I have no issue maintaining 60 FPS on high/ultra in every single game that I play. And this is with a 3070 Ti.

First, not every game is super demanding. Second, I don't play Cyberpunk, lol, so that might be an issue, but I am sure that with tweaked settings and DLSS I would be able to maintain 60 FPS in Cyberpunk with my 3070 Ti. Like I said, I am able to maintain a constant 60 FPS, with settings pretty much maxed, in every game I own. So, personally, I think this notion that 4K is so demanding, even in 2022, is overblown.

Even in my most demanding games, I am still able to maintain a constant 60 FPS with a 3070 Ti. Currently I am in the market for a new display. I want OLED, burn-in risk and all; they are just too sweet. And I'm pretty sure I will stay at 4K, unless I get one of these new OLED ultrawide screens with a 3440 x 1440 resolution. With my new display, my target might become 120 FPS, as it will have a 120 Hz or higher refresh rate.

So, while my 3070 Ti is capable of 4K/60, it will not be capable of 4K/120, which is why I will probably upgrade to a next-gen GPU, hoping that the 4090, 4080, or maybe even the RX 7900, or whatever, will be capable of 4K/120. Even if not in every game: I have been having a glorious experience running 4K/60 with my 3070 Ti, so even if I just get 60 or higher at 4K with the newer GPUs, it will still be a great experience. I love the higher fidelity that 4K brings, especially in 2022. Pair that with a fast OLED screen with those outstanding blacks and HDR, and it sounds fantastic.

BTW, I went from 1920 x 1080 to 2560 x 1440 to 3440 x 1440 to 3840 x 2160. I personally love the extra clarity and screen real estate that 4K brings. It would be very hard for me to go down to a lower resolution, not to mention something like a 34" or 27" screen. I think it would have to be at least 48" (maybe 42") to equal the gaming experience I have now. And looking at these new displays coming out, they are all OLED, either 4K or 3440 x 1440, and they are all big screens.
Last edited by ZeekAncient; Sep 19, 2022 @ 6:24pm
Tiberius Sep 19, 2022 @ 8:16pm 
Originally posted by Soulreaver:
Originally posted by r.linder:
Nah it's still pretty accurate, if not easier to run.

When I tested 1080p with Psycho settings, I was getting a minimum of 80~90 FPS in the middle of the city near Jig Jig Street; at 1440p, the range in the same area is 60~70 FPS at best with the same settings. This is all with DLSS on Balanced, not Quality.

An overclocked i9-9900K should have no problems handling that. It's not as considerable a bottleneck as my old R9 3900X, which gained pretty much nothing when going from 1440p down to 1080p, and that was after patch 1.3. My 3900X was a garbage-bin chip that couldn't even clock better than 4.1 GHz.

I won't continue participating in this. I understand you are not of ill intent, but I definitely won't go over countless charts and results from around 8 hours of testing plus another 4 hours of watching YouTube, reading Reddit, etc.

You may have your own findings and research. Maybe those are better. But after around 12 hours of research and testing, I stand by mine.

Have a good one.

I've personally tested the 5950X vs. the 9900K and saw absolutely no difference in CP2077 performance.
Last edited by Tiberius; Sep 19, 2022 @ 8:16pm

Date Posted: Sep 18, 2022 @ 10:34pm
Posts: 31