The results in the video confirm my suspicion that there is no difference between x2, x3 and x4. There wouldn't be: you need the same data for x2, x3 and x4, namely frame A and frame B.
I think the latency difference between them that people perceive is due to x3 usually being used with a lower base framerate, and the larger input/output disconnect of higher interpolation factors. Your eyes tell you to expect the latency typically associated with 120fps, but your brain perceives 40fps input and present latency, plus the added FG latency.
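(A quick toy sketch of what I mean, just to make the "same data" point concrete; the helper name is mine and generation/compute time is ignored entirely:)

# Toy model: when can interpolation between frame A and frame B start, for multiplier N?
# Frame A finishes at t=0, frame B at t=T (T = base frame time). Nothing can be
# generated until B exists, so the wait is the same no matter the multiplier.
def interpolation_start_ms(base_fps, multiplier):
    frame_time = 1000.0 / base_fps
    # multiplier is deliberately unused here: the data dependency doesn't change with it
    return frame_time

for n in (2, 3, 4):
    print(f"x{n}: generation can begin at t = {interpolation_start_ms(60, n):.2f} ms")
# x2/x3/x4 all print 16.67 ms at a 60fps base -- the wait for frame B is identical.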
Very cool find!
Though Frame Latency Meter does measure visually, from generated mouse input to detected movement. If SpecialK doesn't measure visually, I reckon that makes FLM more accurate for measuring the latency added by frame interpolation. For that reason I'd use it to measure changes I'm making to limiting/syncing-related settings.
So:
- SpecialK: presumably more accurate for estimated total latency, at least without FG.
- FLM: presumably more accurate at measuring the latency added by FG.
FLM isn't that complicated to use; just make sure you keep the FG mode enabled in FLM regardless of whether FG is actually on, as it stumbled on my end without it.
You could even compare with PresentMon if it interests you. And on top of that, there's Reflex on a monitor with a G-SYNC module as the gold standard, if you have access.
The biggest issue I have with part A) is that the HDR Details shader is too damned good with RTX HDR + Vibrance, and it's only usable in half the games... without it I find RTX HDR kind of lacklustre. Whereas Special K + ReShade (though more steps are involved in setting it up) has way more accurate customisation. In fact I've had Special K-configured HDR work better, as far as implementation goes, than some games' native approaches (so many games have bad HDR implementations with poor customisation).
Also, Special K has so many other amazing features. Being able to enable Resizable BAR or add G-Sync to games that didn't come with it stock is also very nice.
But yeah, SK HDR is insane. After native HDR, SK HDR and RTX HDR go head to head, and there's no reason not to use SK HDR given the basically 0% perf hit plus it being able to work with LSFG.
Yeah, as insane as RTX HDR can be when you tweak it just right (I do believe RTX HDR has a much higher ceiling for HDR quality than SK, but it requires completely different settings in every game and it takes too long to test it out and get it just right), it's just unreliable... Like, SK works all the time and it's easy to configure.
RTX HDR for sure has a much higher ceiling for quality. Once you get it perfect for a game it nearly rivals even the best implementations... but the caveats are huge. A) RTX HDR is a wholly unreliable dog sh!t mess because of its stupid af overlay method of application (like, really Nvidia? Ditch the stupid af overlay already). Even when it doesn't glitch out and crash, it has issues applying reliably in so many games.
B) The performance cost of RTX HDR with the appropriate must-use shaders (RTX Vibrance and RTX Details) is quite literally insane. (Yes, Details is actually a shader meant for RTX HDR/Vibrance; the guy who now makes ReShade shaders and used to work for them spilled the beans. Details was basically a configuration shader built for RTX HDR/Vibrance... they had RTX HDR/Vibrance literally 7 years ago, but it was never put into a release because it never worked properly.)
I've had a properly set up RTX HDR with the shaders looking drop-dead gorgeous, better than some of the best native implementations, while at the same time costing basically the same as fuqqing ray tracing (like, what the actual fuq is that)... No joke, in this one game I had it looking so good for a bloody near 38% GPU cost on a 3080 Ti. Like, what even is that? That's just not okay. It needs serious efficiency fixes, because that's terrible, and the number really varies from game to game. Some games it only costs 3-6%, and some games it costs nearly 40% of your GPU... that is so messed up I can't even...
So yeah, they originally designed the Details shader for RTX HDR, but it was then changed to work with everything because it was a solid shader and could help with Auto HDR in games as well. So if you're wondering why it's all of a sudden an unusable mess... that's why. They changed it back so that it syncs with RTX HDR/Vibrance again, LOL.
Lastly, C) RTX HDR doesn't work in a lot of games, whereas with tweaking there isn't a single game I haven't got Special K to work in. There are SOOO many games that RTX HDR either refuses to register in, or, if it does, it refuses to register RTX Vibrance and RTX Details... and if you can't use RTX Vibrance/Details you might as well not even use it, because RTX HDR on its own doesn't hold a candle to Special K HDR. And yeah, the biggest seller is obviously the fact that Special K is basically not far behind a well-done native HDR implementation, with the performance hit of Auto HDR (which is in most cases sh!t).
Hm. Actually x4 should have the least latency, and x2 the most. With the same base FPS, x4 generates more intermediate frames and as a result the first of those intermediate frames needs to be shown sooner compared to x2 or x3, resulting in more immediate visual feedback. So less input lag.
And this is how it actually feels to me. I use a 240Hz display, and with a 58FPS cap (to stay well within G-Sync range at 4x), 2x feels laggy, 3x feels slightly less laggy but it's difficult to tell, and 4x feels definitely less laggy.
116FPS base with 2x to get 232FPS output feels even better, though. I would imagine that if I had a 480Hz monitor, 120 base with 4x would probably be even better.
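(Rough sketch of that timing, as a toy model only: it assumes output frames are paced evenly across one base frame interval, the real frame B is shown last, and generation time is ignored; actual FG implementations may pace things differently, and the helper name is mine.)

# Toy model of frame-generation pacing for multiplier N at a given base framerate.
# Frame A is captured at t=0, frame B at t=T (T = base frame time). Output frames are
# assumed to be spaced T/N apart with the real frame B shown last, so the first
# interpolated frame (the first image carrying B's new content) appears T/N after B.
def first_intermediate_delay_ms(base_fps, multiplier):
    frame_time = 1000.0 / base_fps
    return frame_time / multiplier

for n in (2, 3, 4):
    print(f"58 fps base, x{n}: first interpolated frame ~{first_intermediate_delay_ms(58, n):.1f} ms after frame B")
# -> x2 ~8.6 ms, x3 ~5.7 ms, x4 ~4.3 ms: higher multipliers show the first new image
#    sooner, while the real frame B itself lands at the same time in all three cases.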
I've since realised the same thing.
I do wonder how much of that can be chalked up to our brain associating a higher-FPS visual with lower latency, as the difference between 60x2 and 60x4 should theoretically be a mere 4.17ms, or 1 frame at 240Hz, if my thinking on this is correct. That's probably close to impossible to identify in a blind test.
Do you agree with the 4.17ms?
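(For reference, the arithmetic behind that number under the same toy assumptions as above: first interpolated frame shown one output-frame interval after the real frame, generation time ignored.)

# 60 fps base: T = 1000/60 ms. First new image appears T/2 after frame B at x2, T/4 at x4.
T = 1000.0 / 60          # ~16.67 ms
diff = T / 2 - T / 4     # ~4.17 ms
print(f"x2 vs x4 first-image delay difference: {diff:.2f} ms")
print(f"One refresh at 240 Hz: {1000.0 / 240:.2f} ms")
# Both come out to ~4.17 ms, matching the 4.17ms / one-frame-at-240Hz figure.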