https://steamcommunity.com/app/993090/discussions/0/4338735599617229062/#c4338735599617677680
"I can cap all games at 55fps and EVERY game will run at 165fps now while making more use of my GPU instead of my CPU which has decreased my thermals massively.
Trying to run all game at native 165fps was having my 4080 laptop run at 80c pretty much nonstop with a even though I was using turbo limits and a solid af undervolt. Now running any game at max settings/165fps.. the laptop sits at 60c(CPU) and 55c(GPU).. so yeah. HUGE bloody difference. I think if you're on a mobile RTX 3000/4000 laptop this app is a must have for better thermals."
This isn't at all true when we're talking about frame gen. Interpolated frames affect CPU usage in a very minor way compared to raw frame output. Your experience would only be CPU limited in the case of pushing raw (non-interpolated) frame rates to the max of what your GPU/CPU can potentially output. For example, if your CPU is only capable of pushing 120fps in a game regardless of settings, that is your max frame threshold.. and bottleneck calculators are terrible at actually calculating this.
A good example is Darktide.. a fairly CPU limited game. An RTX 4080 with a 7800X3D only gets just shy of 120fps in this game regardless of settings. If you lower settings all the way down, even with max upscaling and proprietary frame gen, you will only ever see at best a stable 90-120fps, sacrificing all visual fidelity for only a minor gain in frame-pacing stability. However, if you maxed all settings you would still see frame rates in the stable 90-120 range, most often close to 120fps, with somewhat more dips than at lowered settings but not by any drastic degree, so it isn't even worth it.
So lower settings netted you no real tangible difference.. This is because you are CPU limited, not GPU limited.. All you did by increasing graphics settings was increase GPU usage, but your "performance wall", as they call it, is really 90fps: that's the frame rate you can reliably hold in every situation no matter the settings used. Now, with frame gen like LSFG, this changes things. Since your base frame rate isn't going anywhere near your performance wall, you are in fact using less CPU and more GPU. You will still see a small increase in CPU usage, because whenever the GPU is used the CPU is too (every call goes through the CPU first), but it is minor.
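To make the "performance wall" idea concrete, here's a rough back-of-the-envelope model (nothing LSFG actually computes; the figures are just the examples from this thread): without frame gen the displayed frame rate is roughly the minimum of what the CPU and GPU can each deliver, while interpolation multiplies a base frame rate that stays well under the CPU limit.

```python
# Toy model of the "performance wall" idea from this thread.
# Illustrative only: real frame pacing is far more complicated, and the
# numbers (120fps CPU wall, 55fps base, 3x interpolation) are just the
# examples used above.

def native_fps(cpu_limit_fps, gpu_limit_fps):
    """Without frame gen, output is capped by the slower of CPU and GPU."""
    return min(cpu_limit_fps, gpu_limit_fps)

def interpolated_fps(base_cap_fps, cpu_limit_fps, gpu_limit_fps, multiplier):
    """With interpolation, the game only renders the base frame rate;
    the GPU multiplies it, so the CPU only has to keep up with the base."""
    base = min(base_cap_fps, cpu_limit_fps, gpu_limit_fps)
    return base * multiplier

cpu_wall = 120   # CPU-limited game, e.g. the Darktide example
gpu_limit = 200  # GPU could go faster, but the CPU can't feed it

print(native_fps(cpu_wall, gpu_limit))               # 120 -> can't hit 165
print(interpolated_fps(55, cpu_wall, gpu_limit, 3))  # 165 -> hits the target
```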
In my tests on my 13900HX / RTX 4080 laptop.. as you can see in the example Gizzmoe linked, my CPU usage went right down, because interpolated frames use FAR more GPU than CPU. My CPU usage went up a total of 6%, compared to 62% trying to run those frames natively. Lowering resolution made a negligible difference; I could run anything from 1080p to 6K DLDSR and it wouldn't matter much, because until my 55fps base becomes my 'performance wall' nothing changes.
In so many games 165fps was not achievable using proprietary methods like DLSS 3 FG or FSR 3 FG. LSFG was the only way to hit the 165fps target in 100% of scenarios. I'm not a mathematician, but 24% CPU (LSFG 2.1, 55fps base, 165fps output) vs 78% CPU (native 165fps target, which it couldn't even reach.. the frame rate capped out at 147 in the particular game I was testing, Borderlands 3) means the CPU is a LOT less used in the case of LSFG.
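The frame-budget arithmetic lines up with that, roughly speaking: at a native 165fps target the CPU has to prepare a new game frame every ~6ms, while at a 55fps base it gets ~18ms per frame, i.e. it only does a third of the simulation and draw-call work. A quick sanity-check calculation (assumed figures from this thread, nothing measured by LSFG itself):

```python
# Rough CPU frame-budget comparison (assumed figures from this thread).
target_fps = 165
base_fps = 55                          # LSFG base cap; 55 * 3 = 165 with 3x interpolation
multiplier = target_fps // base_fps    # 3

native_budget_ms = 1000 / target_fps   # ~6.06 ms per CPU-prepared frame
lsfg_budget_ms = 1000 / base_fps       # ~18.18 ms per CPU-prepared frame

print(f"Native {target_fps}fps: CPU must finish a frame every {native_budget_ms:.2f} ms")
print(f"LSFG {base_fps}fps base x{multiplier}: CPU gets {lsfg_budget_ms:.2f} ms per frame")
print(f"CPU prepares {base_fps / target_fps:.0%} as many real frames with LSFG")
```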
To sum things up: the only way this uses enough extra CPU to threaten your targets is if your base FPS is at (or above) your performance wall. Otherwise, using this will reduce CPU usage, not increase it, versus the native frame rate. This is a verifiable fact. Not sure why you're saying this will increase CPU usage vs the native rate. It flat out will not.
So this is partially true, but xavvy provides the other 70% of the explanation: it is limiting in the sense that you force the GPU to do more work at the same level of CPU usage, thus 'shifting' the bottleneck away from the CPU to some extent. But it doesn't change anything on the CPU side aside from some additional processing needed for controlling LSFG.