C:\Users\"your name"\AppData\Local\M1\Saved\Config\Windows
I have been thinking about the differences between your GTX 1070 Ti and my RTX 3060, both paired with the same i7-3770S.
The GTX 1070 Ti supports up to CUDA 11.x.
The RTX 3060 can use CUDA 12.x.
This difference leads to slight variations in DX12 feature support.
The GTX 1070 Ti likely cannot fully render TFD, so the amount of data the CPU has to process is reduced.
As a result, your i7-3770S and GTX 1070 Ti seem to be a good match.
The RTX 3060, on the other hand, tries to render TFD fully, but the resulting workload is too much for the i7-3770S.
This is only a hypothesis, but that is how I see it.
I am attempting to replicate your environment through settings, but it is not working well.
DLSS: Balanced
File path:
C:\Users\"your name"\AppData\Local\M1\Saved\Config\Windows
(that is, %LOCALAPPDATA%\M1\Saved\Config\Windows)
## Engine.ini:
Note: the engine normally only reads cvars in Engine.ini from the [SystemSettings] section, so these lines should go under that header.
[SystemSettings]
; mesh, LOD, and view distance settings
r.StaticMesh.UseUnsortedHitProxies=1
r.SkeletalMeshLODBias=1
r.SkeletalMeshLODRadiusScale=0.25
r.StaticMeshLODLevelRatio=0.5
r.ViewDistanceScale=0.8
r.MeshLODRange=0.25
r.SkeletalMeshUseDoublePrecision=0
r.ParallelMeshUpdate=1
; RHI command list / render thread behavior
r.RHICmdBufferPoolingOpt=1
r.RHICmdBypass=0
r.RHICmdUseParallelAlgorithms=1
r.RHICmdUseDeferredContexts=1
r.RDG.Emulation=0
r.OneFrameThreadLag=1
; texture streaming
r.Streaming.PoolSize=2048
r.Streaming.MaxTempMemoryAllowed=256
; Nanite
r.Nanite=1
r.Nanite.MaxPixelsPerEdge=16
; frame queuing and render thread usage
r.MaxFlightCount=4
r.RenderThread.MinimalCPUUsage=1
r.RenderThread.MaxUsage=1
## GameUserSettings.ini:
GamepadSensitivity_NormalX=34.000000
GamepadSensitivity_NormalY=34.000000
GamepadSensitivity_ZoomX=34.000000
GamepadSensitivity_ZoomY=34.000000
[ScalabilityGroups]
sg.ResolutionQuality=100
sg.ViewDistanceQuality=2
sg.AntiAliasingQuality=2
sg.ShadowQuality=2
sg.GlobalIlluminationQuality=2
sg.ReflectionQuality=2
sg.PostProcessQuality=2
sg.TextureQuality=2
sg.EffectsQuality=2
sg.FoliageQuality=2
sg.ShadingQuality=2
sg.MeshQuality=0
sg.PhysicsQuality=2
sg.RayTracingQuality=0
r.PostProcessingEnabled=0
Setting it to 0 should turn post-processing off entirely. That may be too much of a graphical hit, though; it's up to taste. I would be curious whether it helps.
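If the line doesn't take effect from GameUserSettings.ini, here is a minimal sketch of the same override placed in Engine.ini instead, under the [SystemSettings] header used above (assuming this game's engine build honors the cvar):

[SystemSettings]
; turn off the post-processing pass entirely
r.PostProcessingEnabled=0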
Also, I don't think it matters much, but you could override sg.ShadowQuality's result from the engine side directly by using
r.ShadowQuality=
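A minimal Engine.ini sketch, again assuming the [SystemSettings] header; the value 2 here is only an illustration (in Unreal, r.ShadowQuality ranges from 0 to 5, with 5 the engine default and highest):

[SystemSettings]
; illustrative value only; 0 = off, 5 = highest (engine default)
r.ShadowQuality=2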
Good luck
Edit: I'll append some random ones I know, but I don't know whether you've already tried them.
Results WILL vary
r.Streaming.PrioritizeBusyTextures=1
Textures that are used frequently will stay in memory. This could improve performance if the card is struggling and a scene swaps textures a lot.
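For context, a rough sketch of how this might sit alongside the streaming settings already in the Engine.ini block above (the pool size is simply the value from that block, not a recommendation):

[SystemSettings]
; keep frequently-used textures resident within the streaming pool
r.Streaming.PoolSize=2048
r.Streaming.PrioritizeBusyTextures=1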
r.Threads=2
Defaults to 4. You could set it to 2 or 1 if you are CPU-bound. Results WILL vary; probably only bother with this if you really need to.
r.HZBOcclusion=1
Defaults to 0. Setting it to 1 or 2 can potentially improve occlusion culling, but it might not play nicely with every game.
r.SkinCache.CompileShaders=1
Compiles the compute shaders used to skin meshes (the skin cache). It should help, but I'd guess it will make loading take a bit longer.
r.Particles.MaxLOD=0
Higher = more LOD levels, like every other LOD setting, so setting it low should make particles very simple. (It may need to be 1; 0 is the default.)
I played for a few hours myself to find out: this setting improved CPU utilization by a few percentage points at 30 FPS with VSync on.