C1REX May 2, 2023 @ 8:39am
I wasn't aware of Nvidia's CPU driver overhead, and it's shocking how bad it is.
I've recently learnt that Nvidia's drivers put extra load on the CPU, while AMD's do more of that work on the GPU.
That means benchmarks are only really valid if you have the absolute best CPU on the market.
The weaker your CPU, the worse it gets for Nvidia. This applies even to CPUs that are two years old, let alone the still highly popular Ryzen 3600.
While this has been an issue in the past, it has recently escalated into a catastrophe for Nvidia.
Beware.

Showing 1-15 of 15 comments
Skkooomer Lord May 2, 2023 @ 8:41am 
Been that way for a long time and it is why sometimes the AMD cards are so far ahead in weird titles like MW2.
C1REX May 2, 2023 @ 8:57am 
Originally posted by Saturn in Capricorn:
Been that way for a long time and it is why sometimes the AMD cards are so far ahead in weird titles like MW2.
AMD also wins at lower resolutions in Jedi: Survivor.
The thing is, it can happen in any game where you're CPU limited because you're still playing on a Ryzen 3600 or similar.
plat May 2, 2023 @ 9:30am 
Even if I'd seen this video prior to replacing my old gtx 1080, I don't think I would have heeded the warning.

This is one of those "now you tell us" kinds of things. We want new, fresh stuff but don't want to hear about the downside.

This is exactly my situation now, but I actually don't play demanding games, maybe semi-demanding sometimes. In any case, I am seriously CPU-bound: i9-9900 / RTX 4070, CPU at 100%, GPU at 60%. :steamsad:
Wichtelman May 2, 2023 @ 9:35am 
Originally posted by plat:
Even if I'd seen this video prior to replacing my old gtx 1080, I don't think I would have heeded the warning.

This is one of those "now you tell us" ,kinds of things. We want new, fresh stuff but don't want to hear about the down-side.

This is exactly my situation now but I actually don't play demanding games, maybe semi sometimes. In any case, I am seriously cpu-bound. i9 9900/rtx4070. cpu-100% gpu--60%. :steamsad:

I really doubt the i9-9900K gets maxed by non-demanding games...

Last edited by Wichtelman; May 2, 2023 @ 9:45am
C1REX May 2, 2023 @ 9:39am 
Originally posted by plat:
Even if I'd seen this video prior to replacing my old gtx 1080, I don't think I would have heeded the warning.

This is one of those "now you tell us" ,kinds of things. We want new, fresh stuff but don't want to hear about the down-side.

This is exactly my situation now but I actually don't play demanding games, maybe semi sometimes. In any case, I am seriously cpu-bound. i9 9900/rtx4070. cpu-100% gpu--60%. :steamsad:

People should be warned. I almost made a similar mistake with my Ryzen 3950X by upgrading to an Nvidia 4090 or 4080.
So you either upgrade the whole PC if you go Nvidia, or go AMD if you only want a new GPU.
Illusion of Progress
Originally posted by plat:
Even if I'd seen this video prior to replacing my old gtx 1080, I don't think I would have heeded the warning.

This is one of those "now you tell us" ,kinds of things. We want new, fresh stuff but don't want to hear about the down-side.

This is exactly my situation now but I actually don't play demanding games, maybe semi sometimes. In any case, I am seriously cpu-bound. i9 9900/rtx4070. cpu-100% gpu--60%. :steamsad:
Well I don't see the problem. The RTX 4070 is, what, around twice as fast as the GTX 1080? So if you're getting 60% utilization, that means you were GPU bottlenecked before, but barely. But then you claim you don't play visually demanding games, so this sounds preferable. You have the performance headroom for when you need it. You're just not going to be using it when it's not needed, and I don't see the problem there.
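(A rough back-of-the-envelope of that reasoning, sketched in Python; the ~2x figure is the assumption from the paragraph above, not a measurement:)

# If the new GPU is ~2x the old one and now sits at ~60% utilization at the
# same settings, the game is asking for ~1.2x what the old card could deliver
# flat out - i.e. the old card was a (mild) GPU bottleneck before.
new_vs_old = 2.0        # assumption: RTX 4070 ~ 2x a GTX 1080
utilization = 0.60      # reported GPU utilization on the new card
load_in_old_gpu_units = new_vs_old * utilization
print(f"The game currently needs ~{load_in_old_gpu_units:.1f}x a fully loaded GTX 1080")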

I wish people wouldn't stress so much over utilization numbers as if there's some balance they should be at. PCs don't work that way. Just enjoy the performance you have, and upgrade if you want more. Looking at utilization numbers really only matters when you're planning an upgrade and need to work out which part will bring the performance improvement you're after.

And after all, I'm presuming you upgraded from the GTX 1080 because you wanted more performance and determined the GPU was indeed responsible for what was limiting your performance before.

You're always going to have a bottleneck. Who cares where it is? Is the performance you're getting at a level you're happy with? If so, there are no problems. If not, upgrade the CPU next. But if you're only doing it to strike some balance that doesn't exist, be prepared to stress over nothing as you constantly make hardware changes chasing it.

TL;DR: Don't worry and create problems for yourself; be happy. If your new hardware is providing a level of performance you find acceptable, then just enjoy it. There are always going to be shortcomings in almost everything. If it's not affecting you, don't let it rain on your own parade.
plat May 2, 2023 @ 10:04am 
You're 100% on the money, Illusion of Progress.

But in isolation, when you get that bottleneck info, it's kind of a jolt. But in this case, not a big deal. I mean: the most "demanding" game in my library right now is "Stray" for heaven's sake. But yes, you always want that room to expand and that's my intention here. :steamhappy:
Gökyüzü May 2, 2023 @ 10:45am 
I have a Zotac RTX 4070 Ti AMP Extreme AIRO with a Ryzen 5 5500, so I know what a bottleneck is.
I don't really have any problems, but I always have MSI Afterburner running. Not for overclocking, but just to watch my CPU and GPU temps, so if there is anything wrong I can intervene before something bad happens. I've been using this system for about two months now.

About two weeks ago I saw my CPU somehow momentarily hit 600 W while I was playing Division 2. I haven't really seen any performance issue. I mean, yeah, I constantly crash in Division 2 just like everybody else; that's how I noticed, when I got back to Windows. I checked thermals and my pump and everything seems OK. I cleaned it up and moved on. It never happened again, but I still don't know if that was an error from MSI Afterburner or if it was real. If it's real and it does that ♥♥♥♥ for more than 10 seconds, my whole system could fry. It's kind of worrying.
Illusion of Progress
Originally posted by plat:
You're 100% on the money, Illusion of Progress.

But in isolation, when you get that bottleneck info, it's kind of a jolt. But in this case, not a big deal. I mean: the most "demanding" game in my library right now is "Stray" for heaven's sake. But yes, you always want that room to expand and that's my intention here. :steamhappy:
And... now I want to play Stray again because you mentioned it. I still wanted to get a few more achievements on it, like the time rush ones.

Stray really impressed me (both in gameplay and performance). I can't remember offhand the exact settings I was playing at, but my GTX 1060 handled it very well (in my case, that just means staying at or near 60 FPS at 1920 x 1200, which it did). In particular, I was shocked at the low VRAM use for how pretty it is. I remember worrying before it came out whether I'd be able to play at higher settings, based on the pictures alone; it looked like it would be a bit more demanding. I wish every game ran as well as it does.

If that's actually the most demanding thing you play, I would not at all be surprised to see something that's, what, three and a half to four and a half times my GPU not being fully utilized in that game, let alone less demanding ones.

So I wouldn't get hung up on utilization numbers if it's giving you performance you're happy with. With the way the PC gaming market is going, games are going to need it eventually. I'm worried about Silent Hill 2. That's going to be the "big" game that I'll need to have upgraded for by the time it comes out.
Originally posted by Gökyüzü:
About 2 weeks ago i saw my cpu some how manage to get 600w momentarly
I would take that as an error/false reading. I'd imagine that is near impossible.
UserNotFound May 2, 2023 @ 9:34pm 
I think nVidia previously had an advantage in DX11 games, due to the way its driver handled scheduling. AMD drivers, OTOH, have an advantage in DX12 because of how scheduling works there, or something like that; my memory of this is a bit vague, as it was a while back that I read about it.

AMD used to be behind when using a weaker CPU in DX11 games, due to a similar overhead issue; likewise, that's now the case for nVidia drivers in DX12. But this shouldn't matter as long as you have a powerful enough CPU....
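(A toy frame-time model of that overhead argument, in Python with made-up numbers rather than benchmark data: fixed per-frame driver work on the CPU only costs frames once the CPU side is the slower one, which is why a strong enough CPU hides it.)

# Frame rate is set by whichever side is slower: the CPU (game logic plus
# driver submission work) or the GPU (rendering the frame).
def fps(game_ms, driver_ms, gpu_ms):
    return 1000.0 / max(game_ms + driver_ms, gpu_ms)

print(fps(game_ms=4.0, driver_ms=2.0, gpu_ms=10.0))  # ~100 FPS: GPU-bound, driver cost hidden
print(fps(game_ms=9.0, driver_ms=0.5, gpu_ms=6.0))   # ~105 FPS: CPU-bound, light driver
print(fps(game_ms=9.0, driver_ms=2.0, gpu_ms=6.0))   # ~91 FPS: same CPU, heavier driver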
eqalidan May 2, 2023 @ 10:01pm 
Originally posted by Wichtelman:
Originally posted by plat:
Even if I'd seen this video prior to replacing my old gtx 1080, I don't think I would have heeded the warning.

This is one of those "now you tell us" ,kinds of things. We want new, fresh stuff but don't want to hear about the down-side.

This is exactly my situation now but I actually don't play demanding games, maybe semi sometimes. In any case, I am seriously cpu-bound. i9 9900/rtx4070. cpu-100% gpu--60%. :steamsad:

i really doubt the i9-9900k gets maxed by non demanding games...


I never really like those kinds of benchmarks, as they are not exactly realistic. If you have a top-end CPU and a 4090, you are going to be playing at 1440p minimum, if not 4K, so I see these as more academic than real-world.

Also, another point: the RAM speed on the 9000 series vs the 13000 series. When you are pushing higher FPS, RAM is actually a bottleneck again. Years ago RAM was just a case of "if you have the capacity, that's all that matters", but we've got to a point where RAM speed can change FPS by quite a bit.



Originally posted by UserNotFound:
I think nVidia had an advantage in DX11 games previously, due to some way the driver and scheduling worked, I think. AMD drivers, OTOH, has an advantage in DX12 because it has an advantage in scheduling, or something like that, kinda vague memory of this as it was a while back that I'd read about it.

AMD used to be behind when using a weaker CPU in DX11 games, due to a similar overhead issue, likewise, that's the case now for nVidia drivers in DX12. But this should not matter as long as you have a powerful enough CPU....

If I remember correctly, for a long time AMD's primitive units, essentially the first stage of the GPU pipeline, could only handle 4 primitives per clock while Nvidia could do 10-12, making it less of a bottleneck there. It's why AMD had stupidly powerful GPUs on paper that never translated into real performance, and it caused quite a lot of driver overhead for them, though I think this issue has been solved with RDNA.

Mantle is AMD's API for low-level hardware access, and it's what Vulkan/DX12 are based on, in part or entirely. This put Nvidia on the back foot while AMD was already ready.

You then had (not sure about currently) AMD using a hardware scheduler, which when utilized was VERY good, while Nvidia used a software scheduler, which was flexible. This let Nvidia put out "game ready drivers" with ease, but it also meant Nvidia used more CPU power as a result. Given that games were single- or dual-core for so damn long, this was never an issue; however, now that games are utilizing more threads, it is becoming one.
Heretic May 2, 2023 @ 10:34pm 
Current specs:
R7 3700X
32GB 3600MHz CL16
RTX 3080 12GB
3440x1440 144Hz display

The CPU can be a limiting factor, but I'm rarely under-utilising the GPU. I am considering the 5800X3D (as prices fall), but only to reduce the impact in VR, which is more CPU-bound (extra layers of software).

Obviously if I was pushing a lower resolution then my system would be very unbalanced.

However, with the trash that is being released as games today, I'm rapidly losing interest in the platform. Go back a few years and I would have bought that 5800X3D by now. Instead, I'm spending my money on better things.
_I_ May 2, 2023 @ 11:02pm 
At high FPS or refresh rates, the CPU is often the limiting factor.

If the CPU can't keep up preparing frames for the GPU to draw, raise GPU settings or resolution to give the GPU more work, or get a faster CPU that can handle a higher draw rate.

Most games are limited by CPU single-core performance; a CPU that isn't at 100% overall can still be the limit.
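(A quick sketch of that point with made-up numbers; the 8 ms preparation cost and the per-resolution GPU costs are assumptions, not measurements. The CPU's per-frame work sets an FPS ceiling, and raising resolution only moves the bottleneck onto the GPU.)

cpu_prep_ms = 8.0                    # assumed CPU time to prepare one frame -> ~125 FPS ceiling
for gpu_ms in (4.0, 8.0, 12.0):      # assumed GPU cost per frame at rising resolutions/settings
    frame_ms = max(cpu_prep_ms, gpu_ms)
    print(f"GPU {gpu_ms:4.1f} ms -> {1000/frame_ms:5.1f} FPS, GPU busy ~{100*gpu_ms/frame_ms:3.0f}%")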
Wichtelman May 2, 2023 @ 11:39pm 
Originally posted by eqalidan:
Originally posted by Wichtelman:

i really doubt the i9-9900k gets maxed by non demanding games...


I never really like those kinds of benchmarks as they are not exactly realistic, if you have a top end cpu and a 4090 you are going to be playing at 1440p minimum if not 4k, I kind of see these as more academic than real world.

also, this is another point, the ram speed on the 9000 vs the 13000, when you are pushing higher fps, ram is actually a bottleneck again, years ago ram was just kind of if you have the space thats all that matters, but we got to a point that ram can change fps by quite a bit.



Originally posted by UserNotFound:
I think nVidia had an advantage in DX11 games previously, due to some way the driver and scheduling worked, I think. AMD drivers, OTOH, has an advantage in DX12 because it has an advantage in scheduling, or something like that, kinda vague memory of this as it was a while back that I'd read about it.

AMD used to be behind when using a weaker CPU in DX11 games, due to a similar overhead issue, likewise, that's the case now for nVidia drivers in DX12. But this should not matter as long as you have a powerful enough CPU....

If I remember correctly, for a long time amds primitive stuff, essentially the first line in the gpu pipeline was only capable of 4, while nvidia was capable of 10-12 making it less of a bottleneck area, its why amd has stupidly powerful gpus on paper but it never translated, and this caused quite alot of driver overhead for them, granted I think this issue has been solved with rdna.

mantle is amds api for low level hardware access, and its what vulcan/dx12 are based on in part or entirely, this put nvidia on the backfoot while amd was already ready

you then had, not sure currently, amd having a hardware scheduler which when utilized was VERY good, but nvidia using a software scheduler which was flexible. this saw nvidia able to put out 'game ready drivers' with ease, but this also saw nvidia use more cpu power as a result, but given games were single or 2 core for so damn long, this was never an issue. however now that games are kind of utilizing more threads, it is a bit of an issue now.

Why does that matter? If you want to show CPU utilisation, that's best shown by removing the GPU bottleneck.
eqalidan May 4, 2023 @ 9:19am 
Originally posted by Wichtelman:
Originally posted by eqalidan:

I never really like those kinds of benchmarks as they are not exactly realistic, if you have a top end cpu and a 4090 you are going to be playing at 1440p minimum if not 4k, I kind of see these as more academic than real world.

also, this is another point, the ram speed on the 9000 vs the 13000, when you are pushing higher fps, ram is actually a bottleneck again, years ago ram was just kind of if you have the space thats all that matters, but we got to a point that ram can change fps by quite a bit.





If I remember correctly, for a long time amds primitive stuff, essentially the first line in the gpu pipeline was only capable of 4, while nvidia was capable of 10-12 making it less of a bottleneck area, its why amd has stupidly powerful gpus on paper but it never translated, and this caused quite alot of driver overhead for them, granted I think this issue has been solved with rdna.

mantle is amds api for low level hardware access, and its what vulcan/dx12 are based on in part or entirely, this put nvidia on the backfoot while amd was already ready

you then had, not sure currently, amd having a hardware scheduler which when utilized was VERY good, but nvidia using a software scheduler which was flexible. this saw nvidia able to put out 'game ready drivers' with ease, but this also saw nvidia use more cpu power as a result, but given games were single or 2 core for so damn long, this was never an issue. however now that games are kind of utilizing more threads, it is a bit of an issue now.

why does that matter if you want to show cpu utilisation which is best shown if you remove a gpu bottleneck?

Because it's pure academics; I'm more interested in how the game is actually going to perform.

Let me put it this way: the difference between 60 and 75 FPS is a massive jump, likewise 75-90 and 90-120, then a lesser but still notable jump from 120 to 144. The difference between 144 and 165/185 is negligible, same with 240. In all honesty, once you hit 144 you gain more from getting a monitor that ACTUALLY shifts the pixels at any color or brightness in under 7 ms than you do by chasing a higher refresh number.

My interest in this purely ends after 120/144 FPS, and people with $2000 tied up in a CPU and GPU alone are not going to be using a 1080p monitor. This paints a picture that makes the older CPU look worthless compared to the newer one rather than showing anyone whether the extra performance is even worth it.

NOW, that said, the number 1 bottleneck you want to avoid is a CPU one, because that's where you get inconsistent frame times jumping in. That said, I'm not sure if modern CPUs are as affected by this, given that an older CPU hitting a CPU bottleneck would have total system CPU at 100% rather than just a few cores at 100%.
Last edited by eqalidan; May 4, 2023 @ 9:20am
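(The frame-time arithmetic behind that diminishing-returns point, as a small Python snippet; nothing assumed beyond frame time = 1000 / FPS in milliseconds.)

# Each step up in FPS buys fewer milliseconds per frame than the last:
# 60->75 saves ~3.3 ms, while 144->165 saves under 1 ms.
for lo, hi in [(60, 75), (75, 90), (120, 144), (144, 165)]:
    saved_ms = 1000 / lo - 1000 / hi
    print(f"{lo:>3} -> {hi:>3} FPS saves {saved_ms:4.1f} ms per frame")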