S.T.A.L.K.E.R. 2: Heart of Chornobyl

Same FPS on All Settings + Stutters with FSR 3
I'm experiencing a frustrating issue where my game runs with almost the same FPS (50-60) and frame times (~20ms) on all settings—whether set to low or epic. Tuning the graphics settings doesn’t make any difference, and enabling/disabling DLSS doesn’t help either.
Furthermore, if I enable FSR 3, the game becomes completely unplayable, stuttering down to 0 FPS for several seconds at short random intervals (roughly once every 20 seconds).
Here’s my setup:
  • CPU: Intel Core i7-13700K
  • GPU: Asus GeForce RTX 3060
  • RAM: 32 GB Corsair Vengeance DDR5-6000
Other details:
  • V-Sync is OFF.
  • DLSS ON/OFF/Quality/Balanced/Performance/Ultra Performance makes no difference in performance.
  • CPU usage: Threads aren’t maxed out, with usage ranging from 13% to 55% across all 24 threads. (CPU 1–24, %): 42, 15, 49, 13, 36, 31, 36, 26, 55, 35, 47, 41, 34, 25, 47, 21, 31, 28, 27, 27, 30, 28, 28, 28
  • RAM usage: Stable with no sudden spikes or drops throughout.
However, the GPU consistently runs at 97% load. This doesn’t seem like a CPU bottleneck or memory leak.
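(For anyone who wants to collect the same per-thread readings, here is a minimal sketch using the third-party psutil package; the 1-second sampling window is arbitrary.)

```python
# Sample per-logical-CPU load, like the CPU 1-24 list above.
# Requires: pip install psutil
import psutil

# One reading per logical CPU, averaged over a 1-second window.
per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
for i, pct in enumerate(per_cpu, start=1):
    print(f"CPU {i:2d}: {pct:4.1f}%")
print(f"max {max(per_cpu):.1f}%, avg {sum(per_cpu) / len(per_cpu):.1f}%")
```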


Has anyone else run into this issue? Any tips or fixes would be appreciated. Thanks!
This game is a massive VRAM hog, and it's constantly paging VRAM out to RAM. The less VRAM you have, the more it pages. This seems to be an intended consequence of using UE5's Nanite to render the high-poly meshes used throughout the game. In the first town, for example, it reserves around 17 GB to 20 GB of VRAM, with the majority of it being Nanite buffer data at ~15 GB.

Other than getting a new GPU with more VRAM (ideally >20 GB), you could try checking whether your motherboard is limiting your CPU's memory clock speeds. But I could be entirely wrong about that for your PC. You can check by installing the AIDA64 Extreme trial and running the memory benchmark; your RAM should, on paper, be able to read around 48,000 MB/s. Or you can check your memory clock speed in your BIOS settings.

I found out my motherboard was limiting my memory clock speed, and raising it to spec in the BIOS doubled my framerate.
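If you want to sanity-check figures like that, here's a back-of-the-envelope sketch in plain Python. It assumes a standard dual-channel desktop configuration with a 64-bit (8-byte) bus per channel; actual AIDA64 reads land below the theoretical peak.

```python
# Theoretical peak bandwidth: transfers/s x bytes per channel x channels.
# Assumes a standard dual-channel desktop setup (8 bytes per channel).
def peak_bandwidth_mb_s(mega_transfers: int, channels: int = 2,
                        bytes_per_channel: int = 8) -> int:
    return mega_transfers * bytes_per_channel * channels

print(peak_bandwidth_mb_s(6000))  # DDR5-6000, dual channel -> 96000 MB/s
print(peak_bandwidth_mb_s(3600))  # DDR4-3600, dual channel -> 57600 MB/s
# Measured AIDA64 reads typically come in well under this peak
# (often roughly 70-80% of it on desktop platforms).
```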
Originally posted by Steve:
Your RAM should, on paper, be able to read around 48,000 MB/s.

I ran the AIDA64 memory benchmark, and here are my results:
  • Read: 72,571 MB/s
  • Write: 67,952 MB/s
  • Copy: 67,135 MB/s
  • Latency: 91.9 ns

Originally posted by nATpiApX-BaCYA:
i9-9900K
RTX 2080 Ti
M.2 SSD
32 GB DDR4-3600
settings high/epic: 50-60 fps; after the patch I get a stable 80-110
Please check this: https://steamcommunity.com/sharedfiles/filedetails/?id=3370503371

(Not) Thanks for promoting your FPS improvement guide :seriousboss:, but my post isn’t about “low FPS for my hardware.” My issue is that I get the same performance on all settings, even when using DLSS Ultra Performance, which turns the game into a blurry mess.

The problem here isn’t just low FPS—it’s the lack of any scaling based on graphics settings. Please stay on topic. :potionomicsroxanne:
Originally posted by Grimm:
Read: 72,571 MB/s
Write: 67,952 MB/s
Copy: 67,135 MB/s
Latency: 91.9 ns

So I thought memory speed might be the issue, but nah: my read speed is hitting around 28,000 MB/s and I'm getting <20 ms frame times in most areas now, with some occasional stutter, so that's not the bottleneck.

Instead of speculating about what the issue could be, I'll share some interesting things I've found while analyzing GPU captures at different scalability settings.

After messing around with some GPU captures in Zalissya (easily the most demanding spot in the Lesser Zone), I noticed something interesting. Even with graphics on low and resolution scaled down to 66%, this beast still eats up 12-13 GB of VRAM. Pretty wild. That means constant paging on my card with 12 GB of VRAM.

Even on low settings, the game keeps those clouds and fog effects running. That takes about 2.5 ms on my 3080 Ti 12GB. Not a huge deal since it's probably running alongside other GPU work, but it might be rough on other cards. Plus, you know how GPU drivers can be.

The foliage is another story. Game doesn't care what settings you're on - those trees and plants are staying. Sure, there's distance culling, but there's so much vegetation that your GPU's still putting in work.

It's doing a bunch of radiance caching for areas you don't even see.
It's fetching virtual textures you don't see.
It's drawing a visibility buffer for nanite.
It's doing expensive GI on everything no matter the setting (could be ray tracing? I haven't figured that out yet; it could just be fetching pre-rendered volumes).
Every NPC is getting skinned/drawn if you look in their general direction, even if you can't see them behind a wall. All 1M+ vertices of them.

Materials are all over the place, with hundreds or even thousands of different pipelines being orchestrated on the CPU/GPU. No wonder people like Digital Foundry think it's CPU bound... The CPU is technically doing a lot of work there, but it's spending that time sending work to the GPU!

Also, weirdly enough, it looks like it's running AMD FSR's frame scaling even though I'm using DLSS? Either way, that looks unintended...

Virtual shadow mapping is likewise left on no matter the scalability setting, and it alone takes several milliseconds... I could go on and on about every effect they leave on regardless of setting.

The game's constantly streaming textures and geometry - straight up devouring memory like it's nothing. Games have always had memory issues, but this one's pushing it to a whole new level.

If you're really interested, you can take a GPU capture with RenderDoc and see what your performance counters look like. But I only recommend it if you're familiar with graphics APIs like D3D12 and can afford another GPU if it breaks.

I'm just cringing looking at the GPU results right now. Just look at the statistics summary for standing inside Warlock's bar in Zalissya:

Draw calls: 38750
Dispatch calls: 6983
API calls: 314525
API : Draw/Dispatch call ratio: 6.87742
516 textures - 1201.30 MB (1199.41 MB over 32x32); 131 RTs - 392.68 MB
Avg. tex dimension: 576.242x442.255 (744.426x570.924 over 32x32)
4775 buffers - 11312.55 MB total (988.00 MB IBs, 996.73 MB VBs)
12906.53 MB - grand total GPU buffer + texture load

Long story short, I really don't think they invested enough time into a proper scalability/optimization pass in this game. It's like they adjusted the memory usage of each effect according to the settings and called it a day.
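Those summary numbers are easy to sanity-check with a few lines of plain Python (all values copied from the capture above):

```python
# Cross-check the RenderDoc statistics summary quoted above.
draw_calls = 38_750
dispatch_calls = 6_983
api_calls = 314_525
print(api_calls / (draw_calls + dispatch_calls))  # ~6.87742, as reported

textures_mb = 1201.30
render_targets_mb = 392.68
buffers_mb = 11_312.55
print(f"{textures_mb + render_targets_mb + buffers_mb:.2f} MB")
# 12906.53 MB -- right at the edge of a 12 GB card, hence the constant paging
```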
Nothing useful from me, just want to say I have the same thing. Tried different settings, with and without mods: same fps, give or take ±10. Also can't get higher than 70-90 fps.
RTX 3080, i7-9700KF @ 5 GHz, 32 GB RAM @ 3200 MHz, M.2 SSD
Originally posted by Grimm:
I'm experiencing a frustrating issue where my game runs with almost the same FPS (50-60) and frame times (~20ms) on all settings—whether set to low or epic. Tuning the graphics settings doesn’t make any difference, and enabling/disabling DLSS doesn’t help either.
Furthermore, if I enable FSR 3, the game becomes completely unplayable, stuttering down to 0 FPS for several seconds at short random intervals (roughly once every 20 seconds).
Here’s my setup:
  • CPU: Intel Core i7-13700K
  • GPU: Asus GeForce RTX 3060
  • RAM: 32 GB Corsair Vengeance DDR5-6000
Other details:
  • V-Sync is OFF.
  • DLSS ON/OFF/Quality/Balanced/Performance/Ultra Performance makes no difference in performance.
  • CPU usage: Threads aren’t maxed out, with usage ranging from 13% to 55% across all 24 threads. (CPU 1–24, %): 42, 15, 49, 13, 36, 31, 36, 26, 55, 35, 47, 41, 34, 25, 47, 21, 31, 28, 27, 27, 30, 28, 28, 28
  • RAM usage: Stable with no sudden spikes or drops throughout.
However, the GPU consistently runs at 97% load. This doesn’t seem like a CPU bottleneck or memory leak.


Has anyone else run into this issue? Any tips or fixes would be appreciated. Thanks!
This is in line with what many others are experiencing. I have an RTX 3060 and an i7-9700 and get between 55-75 fps at 2560x1440 with a FOV of 100 and no V-Sync/frame limiting. I have most settings set to low because it increases stability; some graphical objects aren't polished and will cause an FPS drain. My DLSS is set to Quality and my sharpness is set to 100%. I have FSR 3 enabled and it gives me extra FPS, maybe 3-5 more. FSR is an AMD technology that the game runs on your Intel/Nvidia hardware, so it could be that your hardware does not like FSR and that is causing your issues. I don't think this is the case, though; I think something else is causing your heavy stuttering.

Big stutters are generally a sign of CPU load, and once you go there you're beyond just the game and hardware; you get into Windows configuration.
As some have said, you could be running into VRAM limitations, but you could also be running into CPU limitations. This game is insanely hard on the CPU. My Threadripper 3960X was CPU limited at 52 fps (without framegen).

To get the most out of Stalker 2 you need as fast a CPU as you can get (preferably a 9800X3D) and a video card with a minimum of 12 GB of VRAM. This is almost regardless of the settings you play at. Even at 1080p, 8 GB of VRAM just doesn't seem to cut it (though you MIGHT be able to get it to work sort of OK by turning down some VRAM-intensive settings).

As far as the CPU limit goes, settings aren't going to make a difference. There is a maximum frame rate you are going to get with any given CPU, and no matter what settings you change, that won't change. You can try enabling frame gen, and while you might get a higher framerate on screen, it will still feel laggy, like playing a 50 fps game.

Just look at this roundup of CPU benchmarks the German site PC Games Hardware did:

https://www.pcgameshardware.de/Stalker-2-Spiel-34596/Specials/Release-Test-Review-Steam-Benchmarks-Day-0-Patch-1459736/6/

The fastest tested CPU is the 9800X3D, and even it only achieves a 98.2 FPS average. That would be pretty playable if not for the miserably low 0.2% and 0.1% minimums, meaning that even on the best gaming CPU out there right now, there are going to be stutters, and that is unavoidable unless they patch something.

Your 13700K achieves about an 81.2 FPS average, which would also normally be OK, but look at those minimums. Yikes! You are going to see frame time spikes and stutters.

If I understand their methodology correctly (sorry, my German is a little rusty; it has been many years), for their 0.2% and 0.1% lows they are measuring the worst 0.2% and 0.1% frame times and then expressing those in FPS (by taking 1 / frame time in seconds).

So, for the 13700K they are saying 0.2% lows are 39 fps and 0.1% lows are 33 fps. That is equivalent to maximum frame times of 25.6 ms and 30.3 ms respectively.

The performance you are seeing might just be the best you can get with a 13700K. But don't feel too bad; no one is getting much better results. Even those who have a 9800X3D with an average of 99.4 FPS see minimums of 51 fps and 42 fps respectively, which is 19.6 ms and 23.8 ms.
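That conversion is trivial to script if you want to play with other numbers; a minimal sketch in plain Python, using the figures above:

```python
# Convert percentile-low FPS figures into worst-case frame times (ms).
def fps_to_ms(fps: float) -> float:
    return 1000.0 / fps

for cpu, (p02_low, p01_low) in {"13700K": (39, 33), "9800X3D": (51, 42)}.items():
    print(f"{cpu}: 0.2% low = {fps_to_ms(p02_low):.1f} ms, "
          f"0.1% low = {fps_to_ms(p01_low):.1f} ms")
# 13700K: 25.6 ms / 30.3 ms    9800X3D: 19.6 ms / 23.8 ms
```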

I am not quite sure what Stalker 2 is doing differently from just about every other game out there (except maybe Starfield, which also had pretty bad CPU performance, but not this bad), but the truth is, no CPU on the market today gets what I would consider "truly playable" framerates; in other words, 0.1% minimums of at least 60 fps (or maximum frame times of 16.67 ms).

Don't get me wrong. I would prefer more than that, but that is what I consider the absolute minimum for me to be happy, and nothing out there can do it right now.

They have been talking about using Nvidia's DLSS frame gen to try to make up for this, and sure, it makes the screen rendering LOOK smoother, but you are still going to have laggy mouse response when you hit those frame time spikes.

I'm hopeful that over time, patching will help improve the CPU performance in this title, because as is, this title is worse in this regard than any other I have ever seen. And don't get me wrong: I'm OK with a game being heavy on hardware if I am getting something for that extra load. But looking at Stalker 2 - other than the awesome Stalker-universe vibe I have wanted to revisit for 15 years - I don't see anything in this title that justifies that crazy CPU load. Other than maybe poorly optimized level design.

"Poor optimization" is usually the battle-cry of whiny kids with obsolete hardware who don't want to face the reality that their hardware is way past its prime, but in the case of Stalker2 it may actually be true.

My leading theory is that maybe - since the team was new to Unreal Engine - they didn't realize how closely you have to monitor your total draw call count, and are just spamming excessive draw calls. That would have this kind of effect on CPU load. I HOPE that isn't the case though, because if it is, it isn't a quick fix. It would require redoing the artwork of every single scene in the game, optimizing it for draw calls. I've never done this, but it sounds HIGHLY labor intensive to me (unless you can figure out a way to have AI do it for you).
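To give a feel for the scale, here is a rough single-threaded upper-bound sketch. The per-call CPU cost is an assumed ballpark purely for illustration (real engines batch and spread submission across threads); the call count is taken from Steve's capture earlier in the thread.

```python
# Rough illustration of draw-call submission cost on the CPU.
draw_calls_per_frame = 38_750   # from the RenderDoc capture above
cost_per_call_us = 1.0          # ASSUMPTION: ~1 microsecond of CPU per call

submit_ms = draw_calls_per_frame * cost_per_call_us / 1000.0
print(f"{submit_ms:.1f} ms/frame just issuing draws "
      f"-> ~{1000.0 / submit_ms:.0f} FPS ceiling on one thread")
# ~38.8 ms -> ~26 FPS; halve the call count (or per-call cost) and the
# ceiling roughly doubles, which is why draw calls matter so much.
```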

So, yeah, I'm hoping I'm wrong on that one. But don't take my comments here as any kind of authority on the subject. I know my hardware, but when it comes to the software driver stack, APIs, and game engine interactions I am quite the layman.
Originally posted by Grimm:
  • RAM: 32 GB Corsair Vengeance DDR5-6000

That doesn't mean much, as Corsair is known to use different chips for exactly the same memory modules (same model/type/part number).
And since you did not provide the latency or a part number, the info is even less useful.

And Intel doesn't like Unreal Engine very much, in case no one has mentioned that already.
Last edited by WhiteSnake76; 4 Feb @ 1:11pm
Also, keep in mind, you don't have to have 100% CPU load in order to be CPU limited, especially not on many-core CPUs or CPUs with a mix of E-cores and P-cores.
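A quick illustration of that point, reusing the per-thread readings from the opening post:

```python
# One fully saturated thread on a 24-thread CPU is only ~4% overall usage,
# so "CPU isn't maxed out" does not rule out a CPU limit.
threads = 24
print(f"{100 / threads:.1f}% overall per saturated thread")  # ~4.2%

readings = [42, 15, 49, 13, 36, 31, 36, 26, 55, 35, 47, 41,
            34, 25, 47, 21, 31, 28, 27, 27, 30, 28, 28, 28]
print(f"max {max(readings)}%, avg {sum(readings) / len(readings):.1f}%")
# max 55%, avg ~32.5%: consistent with a few hot threads, not an idle CPU.
```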
Originally posted by WhiteSnake76:
Originally posted by Grimm:
  • RAM: 32 GB Corsair Vengeance DDR5-6000

That doesn't mean much, as Corsair is known to use different chips for exactly the same memory modules (same model/type/part number).
And since you did not provide the latency or a part number, the info is even less useful.

And Intel doesn't like Unreal Engine very much, in case no one has mentioned that already.


Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns
  • L1 Cache: 1 ns
  • L2 Cache: 4 ns
  • L3 Cache: 17.4 ns

Originally posted by ZarathustraH:
As some have said, you could be running into VRAM limitations, but you could also be running into CPU limitations. This game is insanely hard on the CPU. My Threadripper 3960X was CPU limited at 52 fps (without framegen).

To get the most out of Stalker 2 you need as fast a CPU as you can get (preferably a 9800X3D) and a video card with a minimum of 12 GB of VRAM. This is almost regardless of the settings you play at. Even at 1080p, 8 GB of VRAM just doesn't seem to cut it (though you MIGHT be able to get it to work sort of OK by turning down some VRAM-intensive settings).

As far as the CPU limit goes, settings aren't going to make a difference. There is a maximum frame rate you are going to get with any given CPU, and no matter what settings you change, that won't change. You can try enabling frame gen, and while you might get a higher framerate on screen, it will still feel laggy, like playing a 50 fps game.

Just look at this roundup of CPU benchmarks the German site PC Games Hardware did:

https://www.pcgameshardware.de/Stalker-2-Spiel-34596/Specials/Release-Test-Review-Steam-Benchmarks-Day-0-Patch-1459736/6/

The fastest tested CPU is the 9800X3D, and even it only achieves a 98.2 FPS average. That would be pretty playable if not for the miserably low 0.2% and 0.1% minimums, meaning that even on the best gaming CPU out there right now, there are going to be stutters, and that is unavoidable unless they patch something.

Your 13700K achieves about an 81.2 FPS average, which would also normally be OK, but look at those minimums. Yikes! You are going to see frame time spikes and stutters.

If I understand their methodology correctly (sorry, my German is a little rusty; it has been many years), for their 0.2% and 0.1% lows they are measuring the worst 0.2% and 0.1% frame times and then expressing those in FPS (by taking 1 / frame time in seconds).

So, for the 13700K they are saying 0.2% lows are 39 fps and 0.1% lows are 33 fps. That is equivalent to maximum frame times of 25.6 ms and 30.3 ms respectively.

The performance you are seeing might just be the best you can get with a 13700K. But don't feel too bad; no one is getting much better results. Even those who have a 9800X3D with an average of 99.4 FPS see minimums of 51 fps and 42 fps respectively, which is 19.6 ms and 23.8 ms.

I am not quite sure what Stalker 2 is doing differently from just about every other game out there (except maybe Starfield, which also had pretty bad CPU performance, but not this bad), but the truth is, no CPU on the market today gets what I would consider "truly playable" framerates; in other words, 0.1% minimums of at least 60 fps (or maximum frame times of 16.67 ms).

Don't get me wrong. I would prefer more than that, but that is what I consider the absolute minimum for me to be happy, and nothing out there can do it right now.

They have been talking about using Nvidia's DLSS frame gen to try to make up for this, and sure, it makes the screen rendering LOOK smoother, but you are still going to have laggy mouse response when you hit those frame time spikes.

I'm hopeful that over time, patching will help improve the CPU performance in this title, because as is, this title is worse in this regard than any other I have ever seen. And don't get me wrong: I'm OK with a game being heavy on hardware if I am getting something for that extra load. But looking at Stalker 2 - other than the awesome Stalker-universe vibe I have wanted to revisit for 15 years - I don't see anything in this title that justifies that crazy CPU load. Other than maybe poorly optimized level design.

"Poor optimization" is usually the battle-cry of whiny kids with obsolete hardware who don't want to face the reality that their hardware is way past its prime, but in the case of Stalker 2 it may actually be true.

My leading theory is that maybe - since the team was new to Unreal Engine - they didn't realize how closely you have to monitor your total draw call count, and are just spamming excessive draw calls. That would have this kind of effect on CPU load. I HOPE that isn't the case though, because if it is, it isn't a quick fix. It would require redoing the artwork of every single scene in the game, optimizing it for draw calls. I've never done this, but it sounds HIGHLY labor intensive to me (unless you can figure out a way to have AI do it for you).

So, yeah, I'm hoping I'm wrong on that one. But don't take my comments here as any kind of authority on the subject. I know my hardware, but when it comes to the software driver stack, APIs, and game engine interactions I am quite the layman.


First of all, you seem to be missing the main point of my post. I’m not complaining about performance in general—I’m pointing out that graphics settings have zero impact on performance, even when I downgrade everything to a blurry mess. No matter what settings I use, the FPS (50–60) and frame times (~20ms) remain nearly identical, which suggests the game isn’t properly scaling its workload based on settings.

I understand that Stalker 2 is extremely demanding on both the CPU and VRAM, and the benchmarks you linked show that even top-tier hardware struggles to maintain smooth frame times. However, the issue I’m highlighting isn’t just that the game is demanding—it’s that adjusting settings doesn’t reduce the workload. Normally, lowering graphical settings should improve performance by reducing the complexity of effects and rendering processes, but in this case, the GPU remains at 97% load no matter what. That’s not just a CPU bottleneck—it suggests that certain rendering tasks aren’t being properly disabled or scaled down when settings are changed.

I don’t disagree that Stalker 2 is pushing hardware limits, and I get that even the best CPUs aren’t hitting ideal minimum frame times. But this isn’t just a case of a demanding game exposing system limitations—it looks like the settings aren’t properly adjusting workload distribution. That’s why the GPU is constantly maxed out and why enabling FSR 3 results in unplayable stutters. If the engine were working correctly, lowering settings should at least lessen the strain, but that’s simply not happening.

Steve’s analysis of GPU captures reinforces this. Even with low settings and resolution scaled down, the game still eats 12–13GB of VRAM, continues rendering fog, clouds, and radiance caching, and processes visibility buffers, virtual textures, and expensive global illumination as if it were running on higher settings. Foliage density doesn’t seem to change meaningfully, and NPCs are still being skinned and drawn even when not visible. It’s not just that the game is CPU-intensive; it’s that it keeps sending massive workloads to the GPU regardless of settings. This suggests that the game either lacks a proper optimization pass for scalability or that the settings sliders don’t actually disable the most expensive rendering tasks.

Originally posted by Steve:
The foliage is another story. Game doesn't care what settings you're on - those trees and plants are staying. Sure, there's distance culling, but there's so much vegetation that your GPU's still putting in work.

It's doing a bunch of radiance caching for areas you don't even see.
It's fetching virtual textures you don't see.
It's drawing a visibility buffer for nanite.
It's doing expensive GI on everything no matter the setting (could be ray tracing? I haven't figured that out yet; it could just be fetching pre-rendered volumes).
Every NPC is getting skinned/drawn if you look in their general direction, even if you can't see them behind a wall. All 1M+ vertices of them.

Materials are all over the place, with hundreds or even thousands of different pipelines being orchestrated on the CPU/GPU. No wonder people like Digital Foundry think it's CPU bound... The CPU is technically doing a lot of work there, but it's spending that time sending work to the GPU!

Also, weirdly enough, it looks like it's running AMD FSR's frame scaling even though I'm using DLSS? Either way, that looks unintended...

Virtual shadow mapping is likewise left on no matter the scalability setting, and it alone takes several milliseconds... I could go on and on about every effect they leave on regardless of setting.

The game's constantly streaming textures and geometry - straight up devouring memory like it's nothing. Games have always had memory issues, but this one's pushing it to a whole new level.

If you're really interested, you can take a GPU capture with RenderDoc and see what your performance counters look like. But I only recommend it if you're familiar with graphics APIs like D3D12 and can afford another GPU if it breaks.

I'm just cringing looking at the GPU results right now. Just look at the statistics summary for standing inside Warlock's bar in Zalissya:

Draw calls: 38750
Dispatch calls: 6983
API calls: 314525
API : Draw/Dispatch call ratio: 6.87742
516 textures - 1201.30 MB (1199.41 MB over 32x32); 131 RTs - 392.68 MB
Avg. tex dimension: 576.242x442.255 (744.426x570.924 over 32x32)
4775 buffers - 11312.55 MB total (988.00 MB IBs, 996.73 MB VBs)
12906.53 MB - grand total GPU buffer + texture load

Long story short, I really don't think they invested enough time into a proper scalability/optimization pass in this game. It's like they adjusted the memory usage of each effect according to the settings and called it a day.
Last edited by Grimm; 5 Feb @ 12:36am
Originally posted by Grimm:
Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns
  • L1 Cache: 1 ns
  • L2 Cache: 4 ns
  • L3 Cache: 17.4 ns
I know a lot of folks are giving you hardware advice, but I don't think your hardware is the issue. I think some other software, or your software/OS configuration, is not optimized for gaming. I see a lot of reports about low FPS and not much about stuttering.

However, memory timings are also important, and if they are not set correctly you will have bottlenecks that can cause stuttering via the CPU. WhiteSnake is asking for part numbers because he can research what your memory is and whether there is an issue. With the part number of each stick we can figure out your timings and latency, and whether your memory is a matched pair.

Also, consider turning off virtual memory, i.e. the Windows page file. It is an older technology that was designed around RAM's former lack of speed and capacity. It is on by default and can drag down gaming PCs that do not need it.
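If you want to check whether the page file is actually being touched while the game runs before turning it off, here is a minimal sketch using the third-party psutil package (the 5-second sample interval is arbitrary):

```python
# Log RAM and page-file (swap) usage while the game runs.
# Requires: pip install psutil
import time
import psutil

while True:
    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()  # reports page-file usage on Windows
    print(f"RAM {ram.percent:5.1f}% | pagefile {swap.percent:5.1f}% "
          f"({swap.used / 2**30:.1f} GiB in use)")
    time.sleep(5)  # Ctrl+C to stop
```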
Last edited by Vi'ir Jul K'hane; 5 Feb @ 5:27am
Originally posted by Vi'ir Jul K'hane:
Originally posted by Grimm:
Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns
  • L1 Cache: 1 ns
  • L2 Cache: 4 ns
  • L3 Cache: 17.4 ns
I know a lot of folks are giving you hardware advice, but I don't think your hardware is the issue. I think some other software, or your software/OS configuration, is not optimized for gaming. I see a lot of reports about low FPS and not much about stuttering.

However, memory timings are also important, and if they are not set correctly you will have bottlenecks that can cause stuttering via the CPU. WhiteSnake is asking for part numbers because he can research what your memory is and whether there is an issue. With the part number of each stick we can figure out your timings and latency, and whether your memory is a matched pair.

Also, consider turning off virtual memory, i.e. the Windows page file. It is an older technology that was designed around RAM's former lack of speed and capacity. It is on by default and can drag down gaming PCs that do not need it.

It's a pair of CMK32GX5M2B6400C36.
Originally posted by Grimm:
Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns

Where did you get that number from??? That's way too high... CMK32GX5M2B6400C36 should be a true latency of 11.25 ns and a CAS latency of 36...
I was struggling to get proper results until I turned off AMD's anti-lag setting in my AMD software. After that, weird delays and stutters were gone, and everything worked much better.
Originally posted by WhiteSnake76:
Originally posted by Grimm:
Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns

Where did you get that number from??? That's way too high... CMK32GX5M2B6400C36 should be a true latency of 11.25 ns and a CAS latency of 36...

The AIDA64 memory benchmark... Maybe I somehow benched it the wrong way?

Date Posted: 22 Nov 2024 @ 10:42pm
Posts: 20