When we specify a recommended core count, we mean physical CPU cores, not hyperthreaded cores.
SMT has some weaknesses in how it shares cache between cores, which becomes especially apparent in a threading-intensive game like DW2.
However, I would recommend caution when turning up CPU utilization in the game settings. 100% utilization means that the game really will use all of your cores, giving your CPU a real workout. While in theory this should be fine, in practice some hardware can struggle under sustained heavy load like this, especially if you are overclocking.
Additionally, when CPU utilization is set to 100% and you also constantly use x4 or x8 game speed, the .NET garbage collection process may struggle to keep up with clearing unused memory.
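To illustrate the physical-vs-hyperthreaded distinction above (this is a rough sketch, not anything from the game itself): with SMT on, the OS reports logical processors, so on a conventional two-threads-per-core CPU the physical count is roughly half of what Task Manager shows. A minimal Python sketch, with the halving assumption stated in the comments:

```python
import os

def physical_cores(logical: int, smt_enabled: bool) -> int:
    """Estimate physical core count from the logical (thread) count.

    Assumption: the common 2-threads-per-core SMT layout. Hybrid CPUs
    (P-cores with SMT plus E-cores without, e.g. a 13900k) have an
    asymmetric topology and need a real topology query instead of
    this rough halving.
    """
    return logical // 2 if smt_enabled else logical

# os.cpu_count() reports *logical* processors, e.g. 24 on a
# 12-core SMT part like the Ryzen 9 3900 mentioned in this thread.
print(f"logical processors reported by the OS: {os.cpu_count()}")
print(f"estimated physical cores for 24 logical with SMT on: "
      f"{physical_cores(24, True)}")
```

So when the recommendation says "12 cores", a 12-core/24-thread CPU meets it, even though the OS shows 24 processors.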
I am constantly using x8 speed. What utilization should I set it to so that the .NET garbage collection can keep up with clearing unused memory?
Before I disabled AMD SMT today, I was playing on 2,000 stars and Task Manager said I was at 35%, and the PC was chugging, while only using 12 to 16 GB of my 64 GB of memory.
But after disabling SMT, it ran much smoother. I didn't check CPU utilization with it turned off.
Anyway, this game has made me decide it's time for a CPU upgrade. I'll be getting a 9950X3D in January.
Running MSI Afterburner and RTSS OSD + log dumping, even on the biggest 2000-star, 10x10 map, me + 10 AI empires, into late mid-game / early end-game -->
1- GPU is a joke, the game barely uses it: 42C avg (37 min, 54 max) // 54% avg utilization, min 33 max 145 watts (*on a 4090 with no undervolting)
2- CPU is the real workhorse: 64C avg (min 37, max 77) // 52% avg utilization @ 153W avg (197 max, which is still lower than my BIOS-set PL1/PL2 @ 253W)
*360mm AIO, so probably why it's not hitting 70C+
3- RAM is really useful, the more the better: barely mid-game with 12 AI empires encountered + at least 1 not seen = ~29 GB of RAM used. The start screen told me it needed 12 cores and 48 GB, so I'm finally glad I future-proofed by building this PC with 64 GB.
4- FPS is zero issue and there is no min/max variance: 100% of the logged game is on the dot at 165 fps, which is what I set my 240Hz G-Sync monitor to cap at via NVCP.
TLDR: For me, on a 24-core CPU, the sweet spot after some experimentation is 70%, as that leaves ~1/3 of my cores, and since those are E-cores, as a rough guess ~20-25% of processing power left for my side apps, Discord, etc.
I just used it per your recommendation. Set it to highest priority, all cores; the game ended up utilizing about 75% of the CPU on average, and went as high as 81%. It also used 50% of my 64 GB of RAM.
Anyways, time to start a new game. Something went terribly wrong with the AI that game. I manually clicked 1000 times to build 1000 new colony ships and colonize 1000 worlds, but no colony ships were built, and the 6 colony ships already built were just hanging about. I had plenty of resources and money, and 481 idle space stations.
I'm not going to send in a save; it's like 250 megabytes.
I use a Ryzen 9 3900 with 12 cores.
You mention game utilisation. Is that in the game settings?
The issue I have is that it takes ages to load a game. Any tips to make loads faster?
Yes, CPU utilisation. It's a percentage value. So if you have 12 cores and want to use 10 for the game and leave 2 for the gfx driver and background tasks: (100/12)*10 ≈ 83.
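That arithmetic generalises to any core count. A minimal Python sketch (the function name is just for illustration, not anything from the game's settings):

```python
def utilization_pct(total_cores: int, game_cores: int) -> int:
    """Percentage for the CPU utilisation setting so the game uses
    game_cores out of total_cores physical cores, leaving the rest
    for the graphics driver and background tasks."""
    return round(100 / total_cores * game_cores)

# 12 physical cores, reserve 2 for the system -> 83%
print(utilization_pct(12, 10))
```

The same formula explains the 70% sweet spot mentioned earlier for a 24-core CPU: it reserves roughly 7 cores, about a third, for everything else.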
Tarkov has had this "use only physical cores" setting forever. Basically SMT gives a small, maybe 20%, total boost, but the individual threads have to contend for memory and thus all run slower.
Any reason to run this game at such a high fps other than because you can? I capped it to 90 because anything higher is unnoticeable to me, since it's such a slow-paced game. Mainly just for smoother scrolling.
That's exactly my philosophy too, so zero disagreements (why bother running fps higher when there's no noticeable benefit?).
*If you don't want to read the boring details, the early TLDR: I'm lazy and just set a global NVCP cap instead of fiddling with per-program/per-game override fps caps. It makes each clean uninstall/reinstall of drivers a ton easier, since a clean install means all prior configs are wiped.
** So all I do is reset ONE setting -> global fps cap to 165 (which is the sweet-spot average for my entire library of games, from DW2, where fps above 60-90 really doesn't matter, to my more demanding faster-action aRPGs).
Details: different people have different break points for where that 'not noticeable' lies. And depending on the components owned, even some of the 'low'-tier settings may seem 'high'.
e.g. an fps cap of 165 for a 4090/13900k system is already an extremely low bar in most games. Basically, even if I did set most games like DW2 down to, say, a 60-90 fps cap, I'd see almost no difference in watts, temps, fan speed, etc. At least, not enough to matter: barely rounding error, or a very narrow single-digit difference.
The 90% reason I choose to cap fps or throttle voltage etc. is to get 'good enough' performance, but only if there are some tangible benefits to my hardware exerting itself less:
a) actually seeing some benefit from that lower cap, like lower GPU wattage and temps, lower CPU wattage and temps, and less fan noise, since the first two are the primary drivers of low vs. high fan curves.
But for most PC components, there is a break point where going down much further doesn't give you all that much in 'savings'. So between my lazy-man one-setting-fits-all and not really seeing my wattage/temps go much lower, I just use 165 as the sweet spot.
** The few times I do set a per-program setting is when I want to give an exception to a game. So instead of using NVCP's per-program setting for each game, I use the global setting to cover all games easily, then, if I ever need to, give 1 or 2 games an exception using the per-program override.
^^ A good example of this is an MMO I play. Love/hate Elder Scrolls Online debates aside, the combat mechanics there are extremely finely tuned for a light-attack-weaving player, where input-lag savings of milliseconds can be very noticeable. If you've no idea what I'm talking about, that's very normal.
If you do play ESO at a high level, though, then you'll know that min/max parsing on the 21mil raid atro dummy is a way of life for some players. And fair or unfair, it's just a fact of life in that game that lower input lag = better combat/weaving/light-attack ratio performance. And light attacks are still ~20% of the overall character dps. It's why there is such a huge delta between the floor and ceiling of skill level and damage potential that players exhibit.