Is it only above a certain number of units? Is it a certain weather condition?
But I did not do any tests; maybe it was just an unlucky combination of several things hitting performance. A lot of smoke was present on the battlefield too, and rendering smoke on top of foggy weather might lead to a lot of transparency calculations. So "reducing fog", as recommended, may help; in my experience it is generally a helpful setting for increasing performance.
Smoke and fog really bring down the frame rate a lot. The reason rocket artillery cooks a PC is the amount of smoke happening at once, and probably the simulation of fragments too.
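To make the smoke cost concrete, here is a back-of-the-envelope sketch (illustrative numbers only, not taken from the actual engine): alpha-blended smoke must be blended per pixel for every transparent layer stacked in front of the camera, so the fill-rate cost grows with layer count times screen coverage.

```python
# Illustrative numbers only, not from the actual engine: the fill-rate cost
# of alpha-blended smoke scales with how many transparent layers cover each
# pixel, since every layer must be blended separately.
def overdraw_cost(width, height, layers, coverage):
    """Approximate blend operations per frame for stacked transparent smoke."""
    return int(width * height * layers * coverage)

calm = overdraw_cost(1920, 1080, layers=1, coverage=0.2)      # a little haze
barrage = overdraw_cost(1920, 1080, layers=12, coverage=0.6)  # rocket arty
print(barrage / calm)  # the blend workload multiplies many times over
```

With these made-up values the rocket barrage does 36x the blend work of light haze, which matches the observation that smoke on top of fog tanks the frame rate.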
If you really want to squeeze some extra frames out of the game, you could try a Direct3D-to-Vulkan translation layer like DXVK. I tried it on my system, gained about 10 extra frames, and it showed no signs of glitching or bugging. But I would call it risky: DXVK is designed for Linux, not Windows, so it may or may not work for you.
If the GPU only has to render 30 fps, then it has a lot less to do than at 60 fps or higher. So this is completely normal; most simulation games are CPU limited.
If your GPU were already at, let's say, 90% while drawing 30 fps, that would be bad, because then you would have a GPU bottleneck.
And even with a CPU bottleneck, in a typical real-time simulation engine you will only have one core at 90+%. The overall CPU usage will be much lower, because many cores are not fully used.
So in your scenario with 40% GPU usage and 30 fps: take a look at your per-core CPU usage. One core should be above 85%...
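To illustrate the point about overall usage, here is a toy calculation with made-up per-core readings: one pegged core can hide behind a comfortable-looking total.

```python
# Hypothetical per-core readings for an 8-core CPU during a big battle,
# to show why the *total* CPU percentage hides a single-core bottleneck.
per_core = [92, 35, 20, 15, 12, 10, 8, 8]  # percent busy, per core

overall = sum(per_core) / len(per_core)
bottlenecked = max(per_core) > 85

print(f"overall CPU usage: {overall:.1f}%")        # looks comfortable
print(f"single-core bottleneck: {bottlenecked}")   # but one core is maxed out
```

Here the total reads a harmless-looking 25%, yet the simulation thread's core is pegged at 92%, which is exactly the "40% GPU, 30 fps" situation described above.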
But maybe we will see further parallelization in the future, like calculating terminal ballistics on separate cores, or shifting more previously CPU-bound processes to the GPU (by utilizing RTX features)? IIRC, the GT engine already does some calculations on the GPU that are unrelated to the rendered screen image.
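As an illustrative sketch only (not Graviteam's actual code), independent per-impact calculations like terminal ballistics are a natural fit for a worker pool, since each hit can be resolved without touching the others; the formula below is a toy stand-in.

```python
# Illustrative sketch only, not Graviteam's actual code: independent
# per-impact calculations are a natural fit for a process pool, since each
# hit can be resolved without touching the others.
from concurrent.futures import ProcessPoolExecutor
import math

def penetration_mm(velocity_ms, mass_kg):
    # Toy stand-in for a terminal-ballistics formula (kinetic-energy based).
    energy_kj = 0.5 * mass_kg * velocity_ms ** 2 / 1000
    return round(math.sqrt(energy_kj), 1)

# (velocity in m/s, projectile mass in kg) -- made-up values
impacts = [(800, 6.2), (950, 4.8), (600, 10.0)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        velocities, masses = zip(*impacts)
        results = list(pool.map(penetration_mm, velocities, masses))
    print(results)  # one impact resolved per worker process
```

The catch, as noted elsewhere in the thread, is that not everything splits this cleanly: work that shares state with the main simulation loop cannot simply be farmed out.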
I'd do a performance experiment if I could find a performance tracking program that OBS can record, as my current overlays don't show up in OBS or Steam.
And as I said, I have used a custom API with about a 10 fps boost, though this was with very little proper performance tracking, using API-specific tracking, which can be inaccurate with frame timings.
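One general reason frame-timing trackers can disagree (a common measurement pitfall, not specific to any particular overlay): averaging instantaneous FPS overweights the fast frames, while averaging the frame times themselves reflects what was actually rendered.

```python
# A common measurement pitfall: averaging instantaneous FPS overweights the
# fast frames. Average the frame *times* instead, then invert.
frame_times_ms = [10, 10, 10, 50]  # three fast frames, one heavy stutter

naive_fps = sum(1000 / t for t in frame_times_ms) / len(frame_times_ms)
true_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))

print(round(naive_fps))  # 80: hides the stutter
print(round(true_fps))   # 50: what was actually rendered
```

This is why a claimed "10 fps boost" from an overlay should be taken with a grain of salt unless the frame-time distribution is also recorded.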
Also, "RTX" is basically DXR, a very expensive lighting method; RTX is just Nvidia's branding of it, with Turing cores specifically designed for lighting calculations, which will probably be replaced by cheaper real-time lighting. It has nothing to do with the GPU doing non-image calculations; async compute already exists in the DX12 and Vulkan APIs and in modern games.
Screenshot using API-specific tracking with a DX9-to-Vulkan hybrid: https://steamcommunity.com/sharedfiles/filedetails/?id=2815777145
https://steamcommunity.com/sharedfiles/filedetails/?id=2824334424
https://steamcommunity.com/sharedfiles/filedetails/?id=2824334409
Small Battle in the snow: https://steamcommunity.com/sharedfiles/filedetails/?id=2824336517
It gives me anything between 40 and 70 FPS depending on the situation. Rocket artillery may still cause FPS drops. With 6 BGs or so I can also max out my RTX 3070 and reach approx. 80-90 FPS.
You can use it for much more than "lighting calculations". RTX multi-GPU workstations are heavily used for AI and deep learning (and they use standard "gaming" RTX GPUs too). SB Pro (a very similar engine) is also thinking about utilizing specific "RTX" features in the future, and they are not interested in lighting at all.
Modern games use even multithreading across cores; Ashes of the Singularity, for example, keeps cores evenly loaded to within ~5% and properly uses the GPU.
Also, 7000 MHz VRAM???
That is the GPU RAM (GDDR6). And yes, Mius Front is a game that prefers to run on fewer cores with heavy loads, but it will also distribute work among the other cores properly when that is no longer possible. It will not give you the same performance while doing so; in Mius Front many things apparently cannot be split between cores with equal efficiency.
But again, it does use multiple cores, and rather well, I think.
Just compare very heavy load in the Korotich insane battle: https://steamcommunity.com/sharedfiles/filedetails/?id=2824338085
with chilling in the snow
https://steamcommunity.com/sharedfiles/filedetails/?id=2824336517
It's clearly not using cores efficiently: you have some cores doing nothing, or probably just handling background tasks for Windows.
I feel they could get more performance out of it by tweaking the way the game talks to the hardware, or even with an API change; I've seen some impressive changes done to DX9 games to fix performance issues.
The fact that the game feels smoother on a modified, injected API designed for Linux and built on Vulkan clearly shows there is room for improvement.
Also, here's a slight difference between drivers:
https://steamcommunity.com/sharedfiles/filedetails/?id=2247869222
https://steamcommunity.com/sharedfiles/filedetails/?id=2247869262
https://steamcommunity.com/sharedfiles/filedetails/?id=2247869200
As you can see, driver overhead is less erratic on more modern APIs.
It's exactly the same in the only other similar milsim, SB Pro. Like I said, they now try to move more work to other cores or to the GPU (like terminal ballistics).
Graviteam has also constantly updated the engine for optimization in the background. It actually runs great given the insane detail of the simulation and the number of units in the tactical battles. Zephyr's screenshot shows it (I have to check usage on my PC too):
https://steamcommunity.com/sharedfiles/filedetails/?id=2824338085
And they do simulation-related calculations on the GPU too, IIRC.
Regarding the initial GPU complaint:
GT will always fully utilize the GPU if needed. So in your "40% GPU usage, 30 fps" scenario:
You can set an appropriately higher rendering resolution (supersampling with DSR, or the AMD equivalent) and get GT to use 90-100% of your GPU at 30 fps while getting much higher image quality at the same time. But it does not give you more fps, of course...
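Rough arithmetic behind that suggestion, using the 40% usage figure from the scenario above (everything else is simple pixel counting): GPU load scales roughly with rendered pixel count, while the CPU-bound frame rate stays put.

```python
# Back-of-the-envelope: GPU load scales roughly with rendered pixel count,
# while the CPU-bound frame rate stays put. 40% is the usage figure from
# the scenario above; everything else is simple arithmetic.
def pixels(width, height):
    return width * height

native = pixels(1920, 1080)
dsr_4x = pixels(3840, 2160)  # "4x DSR": double the resolution on each axis

gpu_usage_native = 40  # percent
estimated = min(100, gpu_usage_native * dsr_4x / native)
print(f"estimated GPU usage with 4x supersampling: ~{estimated:.0f}%")
```

Four times the pixels would push a 40%-loaded GPU to saturation, which is why supersampling converts the idle headroom into image quality instead of frames.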
I have never had such a high usage percentage on the second most used core in GT; in your case it is 75% (with the first at 92%). Too bad you disabled total CPU usage; it would be interesting for comparison.
Can I ask: did you turn off hyper-threading globally? Because it shows 8 cores. So is turning hyper-threading off (I don't mean the setting in the game, but turning it off in the BIOS) still a thing?