Maybe 45% CPU utilization at 58°C?
GPU runs at 97% utilization at 70°C, but I have an SFF case, so yeah...
Seems to be working alright, no complaints here.
The CPU needs to feed the data to the GPU; the GPU then just renders it.
Many things just can't be done on the GPU, and some are difficult to do, or require proprietary solutions. The more that is happening in a scene, the more the CPU has to calculate.
Just take Factorio as an example. It's a 2D sprite game that runs on any integrated GPU from the past 15 years, yet it can absolutely max out the fastest CPUs, even ones with a lot of cores, if there's too much going on in the game.
Or another example: Dwarf Fortress. The original game doesn't even have graphics, it's just text based, and it can max out any modern CPU too.
So, tell me, are those games also badly made because they are mostly using the CPU?
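A toy sketch of why simulation-heavy games end up CPU bound (the entity structure and the numbers here are made up purely for illustration):

```python
import time

def simulate_tick(entities):
    # Every gameplay entity needs its logic updated each tick, and that
    # work happens on the CPU no matter how cheap the rendering is.
    for e in entities:
        e["x"] += e["vx"]   # movement
        e["vx"] *= 0.99     # toy physics/friction

# 100k entities -- think Factorio belts or Dwarf Fortress dwarves.
entities = [{"x": 0.0, "vx": 1.0} for _ in range(100_000)]

t0 = time.perf_counter()
simulate_tick(entities)
print(f"one tick: {time.perf_counter() - t0:.4f}s")
# Tick time grows with entity count: double the entities, double the CPU work,
# while the GPU's rendering cost barely changes.
```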
But yeah, he obviously has no clue what he's talking about, and probably just has an older quad-core Intel CPU that is 6+ years old.
Nothing wrong with that, those are still usable CPUs, but the world has advanced since then, and game developers are always pushing boundaries.
Remember, playing a fast-paced shooter like Quake on a Pentium III with a Voodoo 3 at 30 FPS max was once considered very good.
Did you think I was disagreeing or something?
20+ years ago, 30 FPS was considered smooth.
10 years ago, 60 FPS was considered smooth.
Nowadays people are spoiled and want 120+ FPS.
Which you can get, but it requires very good parts, just as it did 10 years ago to get 60 FPS at max settings, or 20+ years ago to get 30 FPS.
I had 90-100% CPU usage and low 50-60% GPU usage, and what helped were settings in the NVIDIA driver.
Threaded optimisation allows the driver to offload certain GPU-related processing tasks to separate thread(s) on available CPU cores. That's fine in GPU-bound scenarios, but a nightmare when the CPU is already at 90-100% usage.
In the NVIDIA control panel, turning off threaded optimisation (from auto) and setting GPU power management to prefer maximum performance (from normal) decreased CPU utilisation by approximately 20% and increased GPU utilisation by around 15%.
Considering I was hitting close to 100% on the CPU, the improvement is noticeable: less stutter and hitching, and generally more consistent and higher frame rates.
Give it a try.
Greetz
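For anyone on Linux, the same driver toggle is exposed as the `__GL_THREADED_OPTIMIZATIONS` environment variable. A minimal sketch of launching a game with it disabled; the `./game` binary path is a placeholder, not a real launcher:

```python
import os
import subprocess

def game_env(threaded_opt: bool = False) -> dict:
    """Copy the current environment and pin the NVIDIA Linux driver's
    threaded-optimisation toggle (__GL_THREADED_OPTIMIZATIONS)."""
    env = os.environ.copy()
    env["__GL_THREADED_OPTIMIZATIONS"] = "1" if threaded_opt else "0"
    return env

# "./game" is a placeholder -- point this at your actual binary/launcher.
# subprocess.run(["./game"], env=game_env(threaded_opt=False))
```

The launch line is left commented out so the snippet is safe to run as-is; the point is that the variable must be set in the game's environment, not just your shell session.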
And yes, I understood your point. You're not that complicated lol
That was until last night, when I discovered something while testing between DX 11 and 12 and purging the shader caches for each test. Your user config file is located near where the shader cache is found (a few folders before it).
Open the user config file and scroll down, or use Ctrl+F and search for "Async compute", and you will find it is set to true.
Now, for me, I own a GTX 1080 Ti, and ever since Async Compute became a thing, it has done nothing but hamper my overall performance. I have to root around in some games' user configs, because some games these days don't outright give you the option to toggle it off in-game.
So now I've had it toggled off since last night, and so far I've been able to reach my FPS cap of 60 without getting stupidly heavy dips. I'll still be doing more testing as time goes on, but for now I do know Async Compute was a common issue for my card, with every game that had it enabled.
So for those of you running in DX 11 mode, go to your user config file and make sure Async Compute is set to false, not true.
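If you end up flipping that flag often, a throwaway script can do the edit. The key name and its exact formatting vary by game, so the regex below is an assumption based on the post; check the spelling in your own config file first:

```python
import re

def set_async_compute(config_text: str, enabled: bool) -> str:
    # Matches variants like  "Async compute": true  or  AsyncCompute=true.
    # The key name/spelling is an assumption -- verify it in your config.
    pattern = re.compile(r'(async\s*compute"?\s*[:=]\s*)(true|false)',
                         re.IGNORECASE)
    value = "true" if enabled else "false"
    return pattern.sub(lambda m: m.group(1) + value, config_text)

# Example on a string (no file paths assumed; read/write your own config):
print(set_async_compute('"Async compute": true', False))
# -> "Async compute": false
```

Back up the original file before writing anything, since games sometimes regenerate or reject edited configs.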
Apart from that hurdle, I really am getting irked by the radio silence from the dev front when it comes to overall performance optimisations. Most games tend to have perf fixes doled out within the first few weeks, but with HD2 it has now been a month and a bit, and we've had no word on perf improvements of any sort (excluding P2P server fixes).
Yeah, people who post those NVIDIA control panel "best settings" vids tend to forget how Threaded optimisation actually works and what it means for games where you're CPU bound. Most just flat out ignore mentioning what it can do in general, but they'll still say "leave it on". Same goes for gamma correction, and people claiming you should force x16 AF: if the game already applies AF, forcing it in the driver means the filtering work is done twice, which adds load you really didn't need (if you do force AF via the control panel, disable it in-game at the very least).
Also, like in my post above, people with older cards like mine really need to check whether Async Compute is enabled, because on my 1080 Ti it actually gives me worse performance rather than the uplift it gives the latest models. Disabling it helped for me too.