Example game: 7 Days To Die
Stock: Game can stutter a lot, low FPS
Affinity limited to 4 cores on CCD 1: No stuttering, proper FPS
Since it's using fewer cores, the task scheduler isn't bouncing the game between different cores.
Stuttering/low FPS depends on the game and how it handles Ryzen. For games where it performs like a Core 2 Duo, limiting affinity to the CCD1 cores or using Ryzen Master's Game Mode preset (which disables CCD2) will fix performance.
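The affinity trick isn't tied to any one tool. As a rough sketch (the Linux stdlib call is shown; on Windows, Task Manager's "Set affinity" or the third-party psutil package do the same job), here's a process pinning itself to the first four logical CPUs:

```python
import os

# Pin the current process to CPUs 0-3 (intersect with what we're
# already allowed to use, in case the machine has fewer CPUs).
wanted = set(range(4)) & os.sched_getaffinity(0)
os.sched_setaffinity(0, wanted)

# Verify which CPUs we are now allowed to run on.
print(sorted(os.sched_getaffinity(0)))
```

Note that logical CPU numbers don't always map to one CCD first; check your topology (e.g. with CoreInfo or lstopo) before assuming CPUs 0-3 are all on CCD1.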
It's fine, but don't limit it to too few cores, because games can become unstable and crash. I would stick to 4 (~6 cores with newer titles).
Also, the potential performance issues have nothing to do with core count: the Ryzen 9's multiple chiplets have a latency between them that occurs when cores from both chiplets are being used by the same application. Depending on the application, that latency can reduce performance.
each bank/module of cores has separate cache, saving time/cycles swapping cache contents between banks
if the cpu has 12 cores (4 banks)
but if it's a 6 core (non-FX) with SMT/HT it won't make a difference
start "" /affinity 0x### "program path/name"
to set cores on launch (start treats the first quoted argument as a window title, so keep the empty "" before the program path)
https://www.tenforums.com/performance-maintenance/47607-affinity-command-cmd.html
easiest to use a calc to convert core values from binary to decimal/hex
0= off, 1 = on
starting with highest core (11,10,9,8,7,6,5,4,3,2,1,0)
0 0 0 0 1 0 1 0 1 0 1 0 (cores 1,3,5,7 = 170 d = 0xAA h)
1 0 1 0 1 0 1 0 0 0 0 0 (cores 5,7,9,11 = 2720 d = 0xAA0 h)
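If you'd rather not do the binary-to-hex conversion by hand, the mask is just one bit per core number. A small Python sketch (the helper name is my own) that reproduces the two examples above:

```python
def affinity_mask(cores):
    """Build an affinity bitmask from a list of core numbers (bit N = core N)."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

print(affinity_mask([1, 3, 5, 7]), hex(affinity_mask([1, 3, 5, 7])))
# 170 0xaa
print(affinity_mask([5, 7, 9, 11]), hex(affinity_mask([5, 7, 9, 11])))
# 2720 0xaa0
```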
create a new shortcut for the program
edit the target line to
cmd /c start "xxxxx" /affinity 0xAA "program path/name"
when that shortcut is launched, it will start the program with affinity set to those cores (the cmd /c prefix is needed because start is a cmd built-in, not an executable, and the quoted "xxxxx" is the window title rather than the program)
You can see hundreds of FPS on a 60 Hz monitor, though they'll be partial frames.
The monitor, however, will only update the picture 60 times a second.
But, to answer your main question:
You wouldn't impact the CPU life span at all.
The CPU still warms up and cools down either way, so it gets the same (negligible) thermal wear.
Changing which cores that happens on won't do anything.
Though, it is good for lowering temps: because you're not spreading the load over as many cores, you're pulling less power, which produces less heat. So the only things you're really doing are making your system quieter and improving performance (if it's on the same CCD and the fewer active cores boost higher than an all-core load would, which they will.)
But unless the cooler is too inefficient even to handle a throttled chip, it'll never be out of the safe range for long. So temperature has almost nothing to do with it.
You can make a case for overclocking, but that's completely out of manufacturer spec and more of the fault of the user for having a bad overclock.
I would limit a game to one chiplet anyway. First-gen Intel Core i CPUs (ix-xxx) used a design similar in spirit to the AMD 3900X, and that created memory latency much like Infinity Fabric does. So running on one chiplet should be more efficient and reduce temps. Non-cached data I/O would still need to go over the Infinity Fabric, which slows things down, but not as badly as bouncing between multiple CPU caches.