Actually, in the video toward the end, the motion estimation seems to have no impact on performance, which I find hard to believe; maybe it's actually the TAA that doesn't cost anything.
https://youtu.be/5jj6nxLQmGg?t=217
[EDIT]
Currently the cost of temporal AA with this method is higher than regular TAA, but I don't have numbers; maybe it's not that bad. If someone has the will to figure it out: we need to know what the fixed cost of Motion Estimation + FSR 2.1 / XeSS / DLSS would be at each resolution (1080p, 1440p, 4K) to know whether it could already work.
I have no doubt someone will come up with a way to do it. But how worthwhile it will be is a good question. ^^
I made a small mistake in this video: I forgot to use a good upscaling tool, like Lossless Scaling. Instead, I used the video editor's own zoom function to zoom both sides of the screen, and this made the right side blurrier than it actually is, since it was technically zoomed more than the left part.
Reshade's performance penalty is not linear; it's frametime-based. Because the cost is a roughly fixed amount of frametime, its relative impact climbs quickly at higher FPS or higher resolutions. So the motion estimation shader can look very heavy on performance depending on your resolution and FPS. Luckily, a small reduction in resolution gives that performance back: in the video I went down from 4K to 3072x1728, which is 80% of 4K on each axis, and quickly regained the performance loss. Based on my observations, the effect takes away about as much performance as MSAA 2x or 4x in modern games (I have those numbers from GTA V and AC: Unity; Batman does NOT have MSAA). If this were an in-game TAA, the performance hit wouldn't be any higher than 5-6%, since TAA generally eats up about that much.
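As a rough illustration of why a fixed frametime cost bites harder at high FPS (the 2 ms shader cost below is a made-up number for illustration, not a measurement from the video):

```cpp
#include <cstdio>

// Illustration only: assume the shader adds a roughly fixed cost per frame.
// A hypothetical 2.0 ms cost is a bigger slice of a short frame than a long one.
int main() {
    const double shader_cost_ms = 2.0;                 // assumed fixed per-frame cost
    const double base_fps[] = {30.0, 60.0, 144.0};
    for (double fps : base_fps) {
        double frame_ms = 1000.0 / fps;                // frametime without the shader
        double new_fps  = 1000.0 / (frame_ms + shader_cost_ms);
        std::printf("%.0f FPS -> %.1f FPS (%.0f%% loss)\n",
                    fps, new_fps, 100.0 * (fps - new_fps) / fps);
    }
    return 0;
}
```

With those assumed numbers the same shader costs about 6% at 30 FPS but over 20% at 144 FPS, which is the "not linear" effect described above.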
I believe that, at some point, Lossless Scaling should consider splitting into two editions: one classic, just as it is now, and another with an injector that hooks into single-player games and synchronizes their FPS while allowing Reshade to be installed on its window. This way Reshade could operate at resolutions higher than the game's canvas and combine its TFAA (temporal AA) power with Lossless Scaling's nice selection of upscalers. Many people ask for FSR 2 or XeSS here; those aren't possible, but what I propose here could be the next best thing.
Edit: spelling.
It "works", and the TAA is a much better implementation than the game's internal one, which in my opinion is so blurry and full of ghosting that it's unusable. Sadly, on my RX6700XT it lowers performance by around 20%, which is just not acceptable. It has many options you can configure but changing most of them results in an even higher performance cost. But the worst part, and what makes it outright unusable when you are actually trying to play a game, is that the UI starts ghosting and artifacting badly whenever something on the screen changes, be it moving the camera, or even scrolling through UI elements will show clear artifacts. And nothing you can configure on the shader itself helps fix that. It's so jarring that I don't see how anyone would be ok using it to actually play a game.
So yeah, like 95% of the stuff you can do with Reshade: it's good for taking screenshots, but since the effect is drawn on top of the UI it creates too many annoying artifacts to be usable while gaming. Until we find a way to inject those kinds of shaders BEFORE the UI of a game is drawn (good luck with that lol), they will just remain as cute curiosities you can use to take screenshots, but not something you will use while playing.
On the other hand, spatial upscalers like RSR or what you get in Lossless Scaling work perfectly well in any situation and don't "break" UI elements.
Obviously the overall goal of your application (as well as other up-scaling solutions) is to increase performance.
But from personal experience, I use this application and other up-scalers primarily in single-player and non-competitive games where I'm trying to get the best visual fidelity without tanking my frame rate. If it's a competitive game where I don't necessarily care all that much about visual fidelity, I'll usually just turn down graphical settings to obtain higher frame rates.
In a single-player or non-competitive game, I am trying to walk the line between best visual fidelity and what I consider a smooth, playable experience (typically 60 FPS). Your application and other up-scalers help me do that.
So, as long as the increased execution time for this type of TAA in ReShade isn't hurting performance to the point of diminishing returns, I believe it's still an improvement for the aforementioned types of situations.
Please let me know what you think!
I guess it comes down to what each person focuses on on the screen. When I play a game I rarely notice what is happening in my room or in the in-game HUD; I immediately get immersed in the "game's world". That's why eliminating pixel crawling (and other types of aliasing) is much more important to me than UI elements.
BTW, Reshade also has a UI Mask shader, with which you can mark parts of your screen so those areas remain untouched by other shaders. Then, if you like to play around, you can draw an outline for those areas and present it as UI design. I actually try to make my own custom designs for each game, because I mostly increase FOV whenever possible and then apply a lens correction to the game, which results in a bent UI.
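For anyone unfamiliar with how a UI mask works, here is a minimal per-pixel sketch of the idea; the struct and function names are made up for illustration and this is not the actual UIMask.fx code:

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Illustration only: where the mask is 1 (a marked UI area) the original,
// untouched pixel is kept; where it is 0 the shader-processed pixel is used.
Color apply_ui_mask(const Color& original, const Color& processed, float mask) {
    mask = std::clamp(mask, 0.0f, 1.0f);
    return {
        processed.r + (original.r - processed.r) * mask,  // lerp(processed, original, mask)
        processed.g + (original.g - processed.g) * mask,
        processed.b + (original.b - processed.b) * mask,
    };
}
```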
Here is an example of lens correction and other Reshade effects [imgsli.com]
(apologies for the off-topic text)
I thought this might give some possibilities for implementing TAAU, but I checked what information is required for this and it is almost the same set as for FSR 2 and XeSS. Also, Reshade has access to the depth buffer, while LS does not. I don't see what opportunities motion vectors provide for LS atm.
I highly doubt that "TFAA at lower resolution + FSR 1 (or LS1) through LS" would be any different from an image "upscaled by LS and then run through Motionvectors + TFAA at higher resolution", because adding pixel jittering to the rendering process is not possible with either Reshade or Lossless Scaling. The second option may actually have a negative impact on performance, because TFAA and Motionvectors would run at the higher resolution.
I previously proposed that Lossless Scaling should consider an optional hooking (DLL injection) approach and optionally operate together with Reshade, because the main purpose of Lossless Scaling is to increase performance without sacrificing quality. When Lossless Scaling and the game run in the same window, their FPS would be tied and performance would be maximized.
There would be another positive effect of operating together with Reshade:
As we all know, FSR, DLSS and XeSS apply themselves at a middle point in the rendering process, before screen-space effects like film grain and chromatic aberration are rendered. In a hybrid approach, where Reshade and LS work together and Reshade works at the higher resolution, such effects would be disabled in-game and re-enabled with Reshade at the higher resolution.
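A rough sketch of the ordering being proposed; every function name below is a hypothetical stand-in, not a real engine, LS, or Reshade API:

```cpp
// Illustration of the frame ordering only; all names are hypothetical.
void render_scene_at_render_resolution() {} // game's 3D pass at the lower resolution
void temporal_upscale() {}                  // where FSR 2 / DLSS / XeSS normally slot in
void ls_upscale_presented_frame() {}        // Lossless Scaling on the output window
void reshade_film_grain() {}                // post effects re-applied by Reshade...
void reshade_chromatic_aberration() {}      // ...at the full output resolution

int main() {
    render_scene_at_render_resolution();
    temporal_upscale();              // in-game film grain / CA stay disabled here
    ls_upscale_presented_frame();
    reshade_film_grain();
    reshade_chromatic_aberration();
}
```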
I don't have time to do any video recordings but if someone wants to do some testing it works.
"...would be any different from an image upscaled by LS and then run through Motionvectors + TFAA at higher resolution."
Actually it would be less expensive, because TFAA with the motion estimation costs less at lower resolution.
A version of LS that integrates with Reshade would provide some interesting possibilities.
An FSR 2.1 implementation might be possible if LS made use of Reshade's Addon system; it could then get direct access to the depth buffer. Alternatively, if Crosire updated Reshade to provide a function to accept and display a higher-resolution buffer than the input frame from the game, LS could grab the game frame and the motion vector texture --> apply scaling with FSR 2.1 (or LS) --> return the upscaled frame to Reshade for display.
A note on FSR 2.1: I am certainly no programmer, but from browsing the docs, the most important input is the motion vectors. The two *Mask inputs it mentions will be auto-generated by FSR if they are not provided. The output may not be as refined as having those masks generated from game data, but it doesn't prevent FSR 2.1 from working most of its magic, and the result would certainly be superior to FSR 1.0.
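For reference, this is roughly what a per-frame FSR 2 dispatch expects, sketched from the public ffx-fsr2-api (AMD FidelityFX FSR 2 SDK); the field names are quoted from memory of the SDK headers and may differ in detail, but the point is which inputs are required and that the two mask inputs can simply be left null:

```cpp
// Sketch only, based on the public ffx-fsr2-api; not verified code.
#include <cstdint>
#include <ffx_fsr2.h>

void dispatch_fsr2(FfxFsr2Context* context, FfxCommandList cmd_list,
                   FfxResource color, FfxResource depth, FfxResource motion_vectors,
                   FfxResource output, float jitter_x, float jitter_y,
                   uint32_t render_width, uint32_t render_height,
                   float frame_time_delta_ms)
{
    FfxFsr2DispatchDescription desc = {};
    desc.commandList   = cmd_list;
    desc.color         = color;            // required: low-res color frame
    desc.depth         = depth;            // required: depth buffer
    desc.motionVectors = motion_vectors;   // required: per-pixel motion vectors
    desc.output        = output;           // required: high-res output target
    // Optional inputs - FSR 2 generates internal fallbacks when these are null:
    // desc.exposure, desc.reactive, desc.transparencyAndComposition
    desc.jitterOffset.x      = jitter_x;   // sub-pixel camera jitter used this frame
    desc.jitterOffset.y      = jitter_y;
    desc.renderSize.width    = render_width;
    desc.renderSize.height   = render_height;
    desc.motionVectorScale.x = (float)render_width;   // if MVs are in UV space
    desc.motionVectorScale.y = (float)render_height;
    desc.frameTimeDelta      = frame_time_delta_ms;
    desc.enableSharpening    = false;
    desc.cameraNear = 0.1f;                // example camera parameters; also needed
    desc.cameraFar  = 1000.0f;
    desc.cameraFovAngleVertical = 1.0f;    // radians, example value

    ffxFsr2ContextDispatch(context, &desc);
}
```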
Finally, Dev, I've been wanting to suggest closer Reshade integration for a while, if only to allow the use of Reshade shaders on the *output scaled frame* that LS produces. Right now you can, for example, run a sharpening shader in Reshade on a game, which of course runs before LS upscales it, when ideally you want a sharpening shader to run after upscaling. For most shaders Reshade provides, running after scaling will always be the better option.
My alternative request would be to port some of these more advanced sharpening shaders -- like quint_sharp -- to LS. The CAS sharpener in LS, while good, doesn't come close to the more advanced sharpeners at bringing out detail within textures as well as geometry, and some, like quint_sharp, also use depth data. And again, not to focus just on sharpening: there are many other shaders that would benefit from running on the final scaled frame.
Anyway, just some ideas!
Integration of LS and Reshade is nearly impossible. LS will not be able to scale the game image after Reshade in the same window, as this would surely break the game. These are completely different pieces of software, each of which works in its own way. None of the devs will ever want to take on this hard and thankless job. The only cooperation I can see is capturing the game window with Reshade applied and scaling it with LS.
For this whole thing to work the way you want, somebody would have to make new combined Reshade-LS software. It would have to hook a particular game and extract the needed info as Reshade does, then use it to create a new scaled frame and output it to a new window as LS does. Even so, the FSR 2.1 you talk about won't work, because you forgot a very important detail that every temporal upscaler requires - jittered camera samples.
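To make the jitter point concrete: a temporal upscaler expects the game to shift its projection matrix by a small sub-pixel offset every frame, so successive frames sample different positions inside each pixel. Below is a sketch of how the FSR 2 documentation describes it; the two ffxFsr2GetJitter* helpers are from the public SDK, while the projection-offset step is paraphrased and has to happen inside the engine itself, which is exactly why Reshade or LS cannot add it from the outside:

```cpp
// Sketch of per-frame camera jitter as described in the FSR 2 documentation.
#include <cstdint>
#include <ffx_fsr2.h>

void get_frame_jitter(int32_t frame_index,
                      int32_t render_width, int32_t display_width,
                      int32_t render_height,
                      float* proj_offset_x, float* proj_offset_y)
{
    float jitter_x = 0.0f, jitter_y = 0.0f;
    const int32_t phase_count =
        ffxFsr2GetJitterPhaseCount(render_width, display_width);
    ffxFsr2GetJitterOffset(&jitter_x, &jitter_y, frame_index, phase_count);

    // The same sub-pixel offset must be applied as a translation of the
    // projection matrix on the engine side, e.g.:
    *proj_offset_x =  2.0f * jitter_x / (float)render_width;
    *proj_offset_y = -2.0f * jitter_y / (float)render_height;
}
```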