Other than getting a new GPU with more VRAM (ideally >20 GB), you could try checking whether your motherboard is nerfing your memory clock speed. But I could be entirely wrong about that for your PC. You can check with the AIDA64 Extreme trial version by running the memory benchmark. From a technical standpoint your RAM should be able to read around 48,000 MB/s. Or you can look in your BIOS settings and check the memory clock speed there.
I found out my motherboard was nerfing my memory clock speed, and raising it to spec in the BIOS doubled my framerate.
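If you want a sanity check on what your kit should be capable of before blaming the game, here's a rough back-of-the-envelope sketch (the speeds and channel count below are just assumed example values, plug in your own): theoretical peak is the data rate times 8 bytes per transfer times the number of channels, and a healthy AIDA64 read usually lands somewhat below that ceiling.

# Rough sketch: theoretical peak memory bandwidth (plug in your own numbers).
# Each DDR4/DDR5 channel is 64 bits wide, i.e. 8 bytes per transfer.
def peak_bandwidth_mb_s(data_rate_mts: float, channels: int = 2) -> float:
    return data_rate_mts * 8 * channels

# Assumed example values: JEDEC fallback speed vs. a rated XMP/EXPO speed.
print(peak_bandwidth_mb_s(4800, 2))  # 76800 MB/s theoretical at 4800 MT/s dual channel
print(peak_bandwidth_mb_s(6400, 2))  # 102400 MB/s theoretical at 6400 MT/s dual channel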
I ran the AIDA64 memory benchmark, and here are my results:
(Not) Thanks for promoting your FPS improvement guide
The problem here isn’t just low FPS—it’s the lack of any scaling based on graphics settings. Please stay on topic.
So I thought memory speed might be the issue, but nah - my read speed's hitting around 28,000 MB/s and I'm getting <20ms in most areas now with some occasional stutter, so that's not the bottleneck.
Instead of speculating about what the issue could be, I'll share some interesting things I've found while analyzing GPU captures at different scalability settings.
After messing around with some GPU captures in Zalissya (easily the most demanding spot in the Lesser Zone), I noticed something interesting. Even with graphics on low and resolution scaled down to 66%, this beast still eats up 12-13 GB of VRAM. Pretty wild. That means constant paging on my card with its 12 GB of VRAM.
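If you want to watch that happen on your own machine, a quick way (just a rough sketch; it assumes an Nvidia card with nvidia-smi on your PATH) is to log VRAM usage once a second while you play:

# Rough sketch: log VRAM usage every second while the game runs.
# Nvidia only; assumes nvidia-smi is installed and on PATH. Ctrl+C to stop.
import subprocess, time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True)
    print(time.strftime("%H:%M:%S"), out.stdout.strip())
    time.sleep(1)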
Even on low settings, the game keeps those clouds and fog effects running. Takes about 2.5ms on my 3080 Ti 12GB to handle that. Not a huge deal since it's probably running alongside other GPU stuff, but it might be rough on other cards. Plus, you know how GPU drivers can be.
The foliage is another story. Game doesn't care what settings you're on - those trees and plants are staying. Sure, there's distance culling, but there's so much vegetation that your GPU's still putting in work.
It's doing a bunch of radiance caching for areas you don't even see.
It's fetching virtual textures you don't see.
It's drawing a visibility buffer for Nanite.
It's doing expensive GI on everything no matter the setting (could be ray tracing? I haven't figured that out yet; it could just be fetching pre-rendered volumes).
Every NPC is getting skinned/drawn if you look in their general direction, even if you can't see them behind a wall. All 1M+ vertices of them.
Materials are all over the place, with hundreds if not thousands of different pipelines being orchestrated on the CPU/GPU. No wonder people like Digital Foundry think it's CPU bound... The CPU is technically doing a lot of work there, but it's spending that time sending work to the GPU!
Also, weirdly enough, it looks like it's running AMD FSR's frame scaling even though I'm using DLSS? Either way, that looks unintended.
Virtual shadow mapping is also left on no matter the scalability setting, and that alone takes several milliseconds... I could go on and on about every effect they leave on regardless of the setting.
The game's constantly streaming textures and geometry - straight up devouring memory like it's nothing. Games have always had memory issues, but this one's pushing it to a whole new level.
If you're really interested, you can take a GPU capture with RenderDoc and see what your performance counters look like. But I only recommend it if you're familiar with graphics APIs like D3D12 and you can afford another GPU if it breaks.
I'm just cringing looking at the GPU results right now. Just look at the statistics summary for standing inside Warlock's bar in Zalissya:
Long story short, I really don't think they invested enough time into a proper scalability/optimization pass in this game. It's like they adjusted the memory usage of each effect according to the settings and called it a day.
Big stutters are generally a sign of a CPU limit, and once you go there you're beyond just the game and hardware; you get into Windows configuration.
To get the most out of Stalker 2 you need as fast a CPU as you can get (preferably a 9800X3D) and a video card with a minimum of 12 GB of VRAM. This holds almost regardless of the settings you play at. Even at 1080p, 8 GB of VRAM just doesn't seem to cut it (though you MIGHT be able to get it working sort of OK by turning down some VRAM-intensive settings).
As far as the CPU limit goes, settings aren't going to make a difference. There is a maximum frame rate you are going to get with any given CPU, and no matter what settings you change, that won't change. You can try enabling frame gen, and while you might get a higher framerate on screen, it will still feel laggy, like playing a 50 FPS game.
Just look at this roundup of CPU benchmarks the German site PC Games Hardware did:
https://www.pcgameshardware.de/Stalker-2-Spiel-34596/Specials/Release-Test-Review-Steam-Benchmarks-Day-0-Patch-1459736/6/
The fastest tested CPU is the 9800X3D, and even it only achieves a 98.2 FPS average. That would be pretty playable if not for the miserably low 0.2% and 0.1% minimums, meaning that even on the best gaming CPU out there right now, there are going to be stutters, and that is unavoidable unless they patch something.
Your 13700K achieves about an 81.2 FPS average, which also would normally be OK, but look at those minimums. Yikes! You are going to see frame time spikes and stutters.
If I understand their methodology correctly (sorry, my German is a little rusty; it has been many years), for their 0.2% and 0.1% lows they are measuring the worst 0.2% and 0.1% of frame times and then expressing those as FPS (by taking 1 / frame time in seconds).
So, for the 13700K they are saying the 0.2% lows are 39 FPS and the 0.1% lows are 33 FPS. That is equivalent to maximum frame times of 25.6 ms and 30.3 ms respectively.
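If anyone wants to double-check that math, it's just the reciprocal of the frame rate; a quick sketch:

# Quick sketch: convert a percentile-low FPS figure into its worst-case frame time.
def fps_to_frametime_ms(fps: float) -> float:
    return 1000.0 / fps

print(round(fps_to_frametime_ms(39), 1))  # 25.6 ms (0.2% lows)
print(round(fps_to_frametime_ms(33), 1))  # 30.3 ms (0.1% lows)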
The performance you are seeing might just be the best you can get with a 13700K. But don't feel too bad. No one is getting much better results. Even those who have a 9800X3D with an average of 99.4 FPS see minimums of 51 FPS and 42 FPS respectively, which works out to 19.6 ms and 23.8 ms.
I am not quite sure what Stalker 2 is doing differently from just about every game out there (except maybe Starfield, which also had pretty bad CPU performance, but not this bad), but the truth is, no CPU on the market today gets what I would consider "truly playable" framerates; in other words, 0.1% minimums of at least 60 FPS (or maximum frame times of 16.67 ms).
Don't get me wrong. I would prefer more than that, but that is what I consider absolute minimum for me to be happy, and nothing out there can do it right now.
They have been talking about using Nvidia's DLSS frame gen to try to make up for this, and sure, it makes the on-screen rendering LOOK smoother, but you are still going to have laggy mouse response when you hit those frame time spikes.
I'm hopeful that over time, patching will help improve the CPU performance in this title, because as it stands, it is worse in this regard than any other game I have ever seen. And don't get me wrong: I'm OK with a game being heavy on hardware if I am getting something for that extra load. But looking at Stalker 2 - other than the awesome Stalker-universe vibe I have wanted to revisit for 15 years - I don't see anything in this title that should justify that crazy CPU load. Other than maybe poorly optimized level design.
"Poor optimization" is usually the battle-cry of whiny kids with obsolete hardware who don't want to face the reality that their hardware is way past its prime, but in the case of Stalker2 it may actually be true.
My leading theory is that maybe - since the team was new to Unreal Engine - they didn't realize how closely you have to monitor your total draw call count, and they are just spamming excessive draw calls. That would have this kind of effect on CPU load. I HOPE that isn't the case though, because if it is, it isn't a quick fix. It would require redoing the artwork of every single scene in the game, optimizing it for draw calls. I've never done this, but it sounds HIGHLY labor intensive to me (unless you can figure out a way to have AI do it for you).
So, yeah, I'm hoping I'm wrong on that one. But don't take my comments here as any kind of authority on the subject. I know my hardware, but when it comes to the software driver stack, APIs and game engine interactions I am quite the layman.
That doesn't mean much, as Corsair is known to use different chips for exactly the same memory modules (same model/type/part number).
And since you did not provide the latency or a part number, the info is even more useless.
And Intel doesn't like Unreal Engine very much, in case no one has mentioned that already.
Micron Technology manufactures my RAM, and here are the latency details:
Memory: 92 ns
First of all, you seem to be missing the main point of my post. I’m not complaining about performance in general—I’m pointing out that graphics settings have zero impact on performance, even when I downgrade everything to a blurry mess. No matter what settings I use, the FPS (50–60) and frame times (~20ms) remain nearly identical, which suggests the game isn’t properly scaling its workload based on settings.
I understand that Stalker 2 is extremely demanding on both the CPU and VRAM, and the benchmarks you linked show that even top-tier hardware struggles to maintain smooth frame times. However, the issue I’m highlighting isn’t just that the game is demanding—it’s that adjusting settings doesn’t reduce the workload. Normally, lowering graphical settings should improve performance by reducing the complexity of effects and rendering processes, but in this case, the GPU remains at 97% load no matter what. That’s not just a CPU bottleneck—it suggests that certain rendering tasks aren’t being properly disabled or scaled down when settings are changed.
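For what it's worth, that part is easy to verify; here's the kind of rough sketch I mean (Nvidia only, and it assumes nvidia-smi is available), logging GPU load while you flip settings back and forth:

# Rough sketch (Nvidia only, nvidia-smi assumed on PATH): sample GPU load every 2 s
# while changing settings in-game; if it never moves off ~97%, the settings aren't
# actually reducing the GPU workload.
import subprocess, time

for _ in range(60):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True)
    print(time.strftime("%H:%M:%S"), out.stdout.strip())
    time.sleep(2)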
I don’t disagree that Stalker 2 is pushing hardware limits, and I get that even the best CPUs aren’t hitting ideal minimum frame times. But this isn’t just a case of a demanding game exposing system limitations—it looks like the settings aren’t properly adjusting workload distribution. That’s why the GPU is constantly maxed out and why enabling FSR 3 results in unplayable stutters. If the engine were working correctly, lowering settings should at least lessen the strain, but that’s simply not happening.
Steve’s analysis of GPU captures reinforces this. Even with low settings and resolution scaled down, the game still eats 12–13GB of VRAM, continues rendering fog, clouds, and radiance caching, and processes visibility buffers, virtual textures, and expensive global illumination as if it were running on higher settings. Foliage density doesn’t seem to change meaningfully, and NPCs are still being skinned and drawn even when not visible. It’s not just that the game is CPU-intensive; it’s that it keeps sending massive workloads to the GPU regardless of settings. This suggests that the game either lacks a proper optimization pass for scalability or that the settings sliders don’t actually disable the most expensive rendering tasks.
However, memory timings are also important, and if they are not being set correctly you will have bottlenecks that can cause stuttering via the CPU. WhiteSnake is asking for part numbers because he can research what your memory is and whether there is an issue. With the part numbers of each stick we can figure out your timings, latency, and whether your memory is a matched pair.
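If you're not sure where to find them, one way (just a sketch; it assumes Windows with the legacy wmic tool still present) is to query the installed sticks directly:

# Sketch: dump manufacturer, part number and speeds for each installed memory stick.
# Windows only; assumes the legacy wmic tool is still available on your install.
import subprocess

out = subprocess.run(
    ["wmic", "memorychip", "get",
     "Manufacturer,PartNumber,Speed,ConfiguredClockSpeed,Capacity"],
    capture_output=True, text=True)
print(out.stdout)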
Also, consider turning off virtual memory, i.e. the Windows page file. It is an outdated technology that was designed around RAM's former lack of speed and capacity. By default it is on, and it can drag down gaming PCs that do not need it.
It's a pair of CMK32GX5M2B6400C36.
Where did you get that number from??? That's way too high... A CMK32GX5M2B6400C36 should have a true latency of 11.25 ns and a CAS latency of 36...
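For reference, that true latency figure falls straight out of the CAS latency and the data rate; a quick sketch of the math:

# Sketch: module "true latency" from CAS latency (in clock cycles) and data rate (MT/s).
# DDR transfers twice per clock, so one cycle takes 2000 / data_rate_mts nanoseconds.
def true_latency_ns(cas_latency_cycles: float, data_rate_mts: float) -> float:
    return cas_latency_cycles * 2000.0 / data_rate_mts

print(true_latency_ns(36, 6400))  # 11.25 ns for a CL36 DDR5-6400 kit
# AIDA64's latency benchmark measures the full round trip through the memory
# controller instead, which is why it reports a much larger number (roughly
# 70-100 ns is typical for DDR5) than the module's CAS latency alone.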
AIDA64 bench... Maybe I somehow benched it the wrong way?